From 3a4275d12fd419c67d55f980b4398c1b9f2b8687 Mon Sep 17 00:00:00 2001
From: nullhack
Date: Fri, 8 May 2026 04:55:50 -0400
Subject: [PATCH 1/3] feat: adopt cex-mm doc consolidation, 3-skill discovery,
 feature discovery improvements
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Consolidated spec documents:
- Removed event_storming.md, context_map.md, technical_design.md templates
- Added Events/Commands and Context Map sections to domain_model template
- Added Technology Stack and Dependencies to product_definition template
- Merged content from deleted docs into domain_model and product_definition

Discovery flow improvements:
- Merged event-storming + language-definition + domain-modeling into single
  domain-discovery state with 3 skills: facilitate-event-storming,
  domain-discovery, define-ubiquitous-language
- Split feature-discovery into 2 skills: discover-features (boundaries)
  and discover-rules (rule derivation)
- Enriched event-storming.md knowledge with Brandolini's 6 phases
- Created domain-modeling.md knowledge (formalization)
- Enriched ubiquitous-language.md with detection heuristics

Feature discovery improvements:
- Created feature-boundaries.md knowledge (Patton story mapping, context
  alignment, splitting criteria)
- Created rule-derivation.md knowledge (Event→Rule, Invariant→Rule,
  Command→Rule, Quality Attribute→Constraint patterns)
- Enriched feature-discovery.md with Content section and gap analysis

Architecture flow: merged adr-draft into technical-design state

Naming conventions: , , , ,

Skill improvements: all 41 skills updated to read all in artifacts before
starting work; 10 skills with procedural improvements

Removed cex-mm-specific Exchange Adapter Fixtures section from AGENTS.md
---
 .flowr/flows/architecture-flow.mermaid        |   3 +
 .flowr/flows/architecture-flow.yaml           |  68 ++----
 .flowr/flows/branding-flow.mermaid            |   3 +
 .flowr/flows/delivery-flow.mermaid            |   3 +
 .flowr/flows/delivery-flow.yaml               |   9 +-
 .flowr/flows/development-flow.mermaid         |   3 +
 .flowr/flows/development-flow.yaml            |  14 +-
 .flowr/flows/discovery-flow.mermaid           |   3 +
 .flowr/flows/discovery-flow.yaml              |  67 ++----
 .flowr/flows/document-dependencies.yaml       | 201 ------------------
 .flowr/flows/feature-development-flow.mermaid |   3 +
 .flowr/flows/feature-development-flow.yaml    |   2 +-
 .flowr/flows/main-flow.mermaid                |   3 +
 .flowr/flows/planning-flow.mermaid            |   3 +
 .flowr/flows/planning-flow.yaml               |  26 +--
 .flowr/flows/post-mortem-flow.mermaid         |   3 +
 .flowr/flows/post-mortem-flow.yaml            |  10 +-
 .flowr/flows/review-gate-flow.mermaid         |   3 +
 .flowr/flows/review-gate-flow.yaml            |  12 +-
 .flowr/flows/setup-project-flow.mermaid       |   3 +
 .flowr/flows/tdd-cycle-flow.mermaid           |   3 +
 .flowr/flows/tdd-cycle-flow.yaml              |  13 +-
 .../knowledge/architecture/assessment.md      |   4 +-
 .../architecture/quality-attributes.md        |   7 +-
 .../knowledge/architecture/reconciliation.md  |   2 +-
 .../domain-modeling/domain-modeling.md        |  77 +++++++
 .../domain-modeling/event-storming.md         | 109 +++++++---
 .../requirements/feature-boundaries.md        |  71 +++++++
 .../requirements/feature-discovery.md         |  44 +++-
 .../knowledge/requirements/rule-derivation.md |  93 ++++++++
 .../requirements/ubiquitous-language.md       |  64 ++++--
 .../knowledge/skill-design/principles.md      |   4 +-
 .opencode/skills/accept-feature/SKILL.md      |   2 +-
 .opencode/skills/analyze-root-cause/SKILL.md  |   2 +-
 .opencode/skills/assess-architecture/SKILL.md |   2 +-
 .opencode/skills/break-down-feature/SKILL.md  |   4 +-
 .../skills/commit-implementation/SKILL.md     |   2 +-
 .opencode/skills/conduct-interview/SKILL.md   |   2 +-
 .opencode/skills/confirm-baseline/SKILL.md    |   2 +-
 .opencode/skills/create-pr/SKILL.md           |   2 +-
 .opencode/skills/create-py-stubs/SKILL.md     |   2 +-
 .opencode/skills/decide-batch-action/SKILL.md |   2 +-
 .opencode/skills/define-done/SKILL.md         |   2 +-
 .../skills/define-product-scope/SKILL.md      |   2 +-
 .../define-ubiquitous-language/SKILL.md       |  15 +-
 .opencode/skills/design-assets/SKILL.md       |   2 +-
 .opencode/skills/design-colors/SKILL.md       |   2 +-
 .../skills/design-technical-solution/SKILL.md |  17 +-
 .../skills/determine-action-items/SKILL.md    |   2 +-
 .opencode/skills/discover-features/SKILL.md   |  26 +--
 .opencode/skills/discover-rules/SKILL.md      |  17 ++
 .../discover-rules/discover-rules/SKILL.md    |  17 ++
 .../skills/document-post-mortem/SKILL.md      |   2 +-
 .opencode/skills/domain-discovery/SKILL.md    |  18 ++
 .opencode/skills/draft-adr/SKILL.md           |   5 +-
 .opencode/skills/extract-lessons/SKILL.md     |   2 +-
 .../skills/facilitate-event-storming/SKILL.md |  18 +-
 .opencode/skills/implement-minimum/SKILL.md   |   4 +-
 .opencode/skills/map-contexts/SKILL.md        |   9 +-
 .opencode/skills/merge-local/SKILL.md         |   2 +-
 .opencode/skills/model-domain/SKILL.md        |   2 +-
 .opencode/skills/refactor/SKILL.md            |   4 +-
 .opencode/skills/review-architecture/SKILL.md |   4 +-
 .opencode/skills/review-conventions/SKILL.md  |   2 +-
 .opencode/skills/review-design/SKILL.md       |   2 +-
 .opencode/skills/review-structure/SKILL.md    |   2 +-
 .opencode/skills/select-feature/SKILL.md      |   4 +-
 .opencode/skills/setup-apply/SKILL.md         |   2 +-
 .opencode/skills/setup-assess/SKILL.md        |   2 +-
 .opencode/skills/setup-branding/SKILL.md      |   2 +-
 .opencode/skills/setup-configure/SKILL.md     |   2 +-
 .opencode/skills/setup-verify/SKILL.md        |   2 +-
 .opencode/skills/structure-project/SKILL.md   |   2 +-
 .opencode/skills/verify-traceability/SKILL.md |   2 +-
 .opencode/skills/write-bdd-features/SKILL.md  |   2 +-
 .opencode/skills/write-test/SKILL.md          |   4 +-
 ...late => ADR_YYYYMMDD_.md.template}         |   2 +-
 ...e => IN_YYYYMMDD_.md.template}             |   4 +-
 ...mplate => PM_YYYYMMDD_.md.template}        |   2 +-
 .templates/docs/spec/context_map.md.template  |  47 ----
 .templates/docs/spec/domain_model.md.template |  41 +++-
 .../docs/spec/event_storming.md.template      |  47 ----
 .../docs/spec/product_definition.md.template  |  20 ++
 .../docs/spec/technical_design.md.template    | 151 -------------
 ...py.template => _test.py.template}          |   2 +-
 AGENTS.md                                     |  37 +++-
 86 files changed, 747 insertions(+), 763 deletions(-)
 create mode 100644 .flowr/flows/architecture-flow.mermaid
 create mode 100644 .flowr/flows/branding-flow.mermaid
 create mode 100644 .flowr/flows/delivery-flow.mermaid
 create mode 100644 .flowr/flows/development-flow.mermaid
 create mode 100644 .flowr/flows/discovery-flow.mermaid
 delete mode 100644 .flowr/flows/document-dependencies.yaml
 create mode 100644 .flowr/flows/feature-development-flow.mermaid
 create mode 100644 .flowr/flows/main-flow.mermaid
 create mode 100644 .flowr/flows/planning-flow.mermaid
 create mode 100644 .flowr/flows/post-mortem-flow.mermaid
 create mode 100644 .flowr/flows/review-gate-flow.mermaid
 create mode 100644 .flowr/flows/setup-project-flow.mermaid
 create mode 100644 .flowr/flows/tdd-cycle-flow.mermaid
 create mode 100644 .opencode/knowledge/domain-modeling/domain-modeling.md
 create mode 100644 .opencode/knowledge/requirements/feature-boundaries.md
 create mode 100644 .opencode/knowledge/requirements/rule-derivation.md
 create mode 100644 .opencode/skills/discover-rules/SKILL.md
 create mode 100644 .opencode/skills/discover-rules/discover-rules/SKILL.md
 create mode 100644 .opencode/skills/domain-discovery/SKILL.md
 rename .templates/docs/adr/{ADR_YYYYMMDD_.md.template => ADR_YYYYMMDD_.md.template} (97%)
 rename .templates/docs/interview-notes/{IN_YYYYMMDD_.md.template => IN_YYYYMMDD_.md.template} (95%)
 rename .templates/docs/post-mortem/{PM_YYYYMMDD_.md.template => PM_YYYYMMDD_.md.template} (87%)
 delete mode 100644 .templates/docs/spec/context_map.md.template
 delete mode 100644 .templates/docs/spec/event_storming.md.template
 delete mode 100644 .templates/docs/spec/technical_design.md.template
 rename .templates/tests/features/{_test.py.template => _test.py.template} (82%)

diff --git a/.flowr/flows/architecture-flow.mermaid b/.flowr/flows/architecture-flow.mermaid
new file mode 100644
index 00000000..6adff076
--- /dev/null
+++ b/.flowr/flows/architecture-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"architecture-assessment\" as architecture-assessment\n state \"context-mapping\" as context-mapping\n state \"technical-design\" as technical-design\n state \"review-signoff\" as review-signoff\n architecture-assessment --> complete : no_architecture_needed | architecture_complete: ==verified\n architecture-assessment --> context-mapping : needs_context_update | domain_model_md: ==exists\n architecture-assessment --> technical-design : needs_technical_design | domain_model_md: ==exists\n architecture-assessment --> context-mapping : greenfield | domain_model_md: ==missing\n architecture-assessment --> needs_discovery : delivery_mismatch_unresolvable\n architecture-assessment --> needs_discovery : needs_discovery\n context-mapping --> technical-design : done\n context-mapping --> needs_discovery : needs_discovery\n technical-design --> review-signoff : done\n review-signoff --> complete : approved | alignment: ==domain_model_verified, adr_compliance: ==adrs_respected, committed_to_main_locally: ==verified\n review-signoff --> architecture-assessment : inconsistent\n review-signoff --> needs_discovery : needs_discovery"
+}
diff --git a/.flowr/flows/architecture-flow.yaml b/.flowr/flows/architecture-flow.yaml
index 2058d41c..efb6cda5 100644
--- a/.flowr/flows/architecture-flow.yaml
+++ b/.flowr/flows/architecture-flow.yaml
@@ -1,5 +1,5 @@
 flow: architecture-flow
-version: 6.0.0
+version: 8.0.0
 exits:
   - complete
   - needs_discovery
@@ -15,23 +15,17 @@ states:
     in:
       - product_definition.md
       - domain_model.md
-      - "technical_design.md" # optional
-      - "context_map.md" # optional
    out:
       - product_definition.md:
          - deployment
          - quality_attributes
    conditions:
      architecture_complete:
-        technical_design_md: ==exists
-        context_map_md: ==exists
-        deployment_matches_codebase: ==verified
+        architecture_complete: ==verified
      architecture_exists:
-        technical_design_md: ==exists
-        context_map_md: ==exists
+        domain_model_md: ==exists
      no_architecture_exists:
-        technical_design_md: ==false
-        context_map_md: ==false
+        domain_model_md: ==missing
    next:
      no_architecture_needed:
        to: complete
@@ -50,7 +44,7 @@ states:
  - id: context-mapping
    attrs:
-      description: "SA maps bounded context relationships, integration points, and anti-corruption layers"
+      description: "SA maps bounded context relationships, Vernon patterns, and anti-corruption layers into domain_model.md"
      owner: SA
      git: main
    skills:
@@ -60,62 +54,30 @@ states:
      - product_definition.md
      - glossary.md
    out:
-      - context_map.md:
-          - context_relationships
-          - context_map_diagram
-          - integration_points
-          - anti_corruption_layers
+      - domain_model.md:
+          - context_map
+          - changes
    next:
      done: technical-design
      needs_discovery: needs_discovery

  - id: technical-design
    attrs:
-      description: "SA designs the technical solution — architectural style, stack, module structure, API/event contracts, interface definitions"
+      description: "SA selects technology stack, documents dependencies, and drafts ADRs for architectural decisions"
      owner: SA
      git: main
    skills:
      - design-technical-solution
+      - draft-adr
    in:
-      - context_map.md
      - domain_model.md
      - glossary.md
      - product_definition.md
-      - "technical_design.md" # optional
    out:
-      - technical_design.md:
-          - architectural_style
-          - quality_attributes
-          - stack
-          - module_structure
-          - api_contracts
-          - event_contracts
-          - interface_definitions
-          - c4_diagrams
+      - product_definition.md:
+          - technology_stack
          - dependencies
-          - configuration_keys
-    next:
-      done: review-signoff
-      needs_decisions: adr-draft
-
-  - id: adr-draft
-    attrs:
-      description: "SA documents architecturally significant decisions as ADRs and records key decisions and active constraints in technical_design.md"
-      owner: SA
-      git: main
-    skills:
-      - draft-adr
-    in:
-      - technical_design.md
-      - context_map.md
-      - domain_model.md
-      - product_definition.md
-      - glossary.md
-    out:
-      - technical_design.md:
-          - key_decisions
-          - active_constraints
-          - adr/.md
+      - adr/.md
    next:
      done: review-signoff
@@ -127,9 +89,7 @@ states:
    skills:
      - review-architecture
    in:
-      - context_map.md
-      - technical_design.md
-      - "adr/*.md" # optional
+      - "adr/.md" # optional
      - product_definition.md
      - domain_model.md
      - glossary.md
diff --git a/.flowr/flows/branding-flow.mermaid b/.flowr/flows/branding-flow.mermaid
new file mode 100644
index 00000000..1e320244
--- /dev/null
+++ b/.flowr/flows/branding-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"setup-branding\" as setup-branding\n state \"design-colors\" as design-colors\n state \"design-assets\" as design-assets\n setup-branding --> design-colors : confirmed\n setup-branding --> cancelled : cancelled\n design-colors --> design-assets : approved\n design-colors --> design-colors : revise\n design-colors --> cancelled : cancelled\n design-assets --> branded : approved | logo_monochrome: ==passes, logo_scalability: ==passes, logo_blur_test: ==passes, committed_to_main_locally: ==verified\n design-assets --> design-assets : revise\n design-assets --> cancelled : cancelled"
+}
diff --git a/.flowr/flows/delivery-flow.mermaid b/.flowr/flows/delivery-flow.mermaid
new file mode 100644
index 00000000..5d69634c
--- /dev/null
+++ b/.flowr/flows/delivery-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"acceptance\" as acceptance\n state \"local-merge\" as local-merge\n state \"publish-decision\" as publish-decision\n state \"pr-creation\" as pr-creation\n acceptance --> local-merge : approved | feature_status: ==ACCEPTED\n acceptance --> rejected : rejected\n local-merge --> publish-decision : merged\n local-merge --> needs_development : conflict\n publish-decision --> next-feature : accumulate\n publish-decision --> pr-creation : publish\n pr-creation --> next-feature : approved | ci_passes: ==verified, no_changes_requested: ==verified\n pr-creation --> needs_development : changes_requested\n pr-creation --> cancelled : cancelled"
+}
diff --git a/.flowr/flows/delivery-flow.yaml b/.flowr/flows/delivery-flow.yaml
index fcbe68f4..cf1df3f5 100644
--- a/.flowr/flows/delivery-flow.yaml
+++ b/.flowr/flows/delivery-flow.yaml
@@ -1,6 +1,6 @@
 flow: delivery-flow
 version: 5.0.0
-params: [feature_name]
+params: [feature_id]

 exits:
   - next-feature
@@ -18,8 +18,9 @@ states:
      - accept-feature
      - verify-traceability
    in:
-      - features/.feature
+      - features/.feature
      - product_definition.md
+      - glossary.md
    out:
      - acceptance_evidence
      - approval_record
@@ -42,7 +43,7 @@ states:
    in:
      - feature_commits
      - approval_record
-      - features/.feature
+      - features/.feature
    out:
      - merged_commits
    next:
@@ -72,7 +73,7 @@ states:
      - create-pr
    in:
      - merged_commits
-      - features/.feature
+      - features/.feature
    out: []
    conditions:
      merged:
diff --git a/.flowr/flows/development-flow.mermaid b/.flowr/flows/development-flow.mermaid
new file mode 100644
index 00000000..e07c02f7
--- /dev/null
+++ b/.flowr/flows/development-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"project-structuring\" as project-structuring\n tdd-cycle --> tdd-cycle-flow\n note right of tdd-cycle: invokes tdd-cycle-flow\n state \"tdd-cycle\" as tdd-cycle\n review-gate --> review-gate-flow\n note right of review-gate: invokes review-gate-flow\n state \"review-gate\" as review-gate\n state \"commit\" as commit\n project-structuring --> tdd-cycle : ready\n project-structuring --> needs_planning : needs_planning\n tdd-cycle --> review-gate : all_green | yagni: ==no_premature_abstractions, kiss: ==simplest_solution, dry: ==no_duplicated_logic, objcal: ==calisthenics_followed, smells: ==all_smells_addressed, solid: ==principles_applied, patterns: ==patterns_justified\n tdd-cycle --> project-structuring : blocked\n review-gate --> commit : pass\n review-gate --> tdd-cycle : fail\n commit --> done : done"
+}
diff --git a/.flowr/flows/development-flow.yaml b/.flowr/flows/development-flow.yaml
index 375bd44c..251c2f1b 100644
--- a/.flowr/flows/development-flow.yaml
+++ b/.flowr/flows/development-flow.yaml
@@ -1,6 +1,6 @@
 flow: development-flow
-version: 6.0.0
-params: [feature_name]
+version: 8.0.0
+params: [feature_id]
 exits:
   - done
   - needs_planning
@@ -14,11 +14,9 @@ states:
    skills:
      - structure-project
    in:
-      - features/.feature
-      - technical_design.md
+      - features/.feature
      - domain_model.md
      - glossary.md
-      - context_map.md
      - product_definition.md
    out:
      - git_branch
@@ -31,7 +29,7 @@ states:
      description: "SE implements the feature through repeated RED-GREEN-REFACTOR cycles until all BDD examples pass"
      git: feature
    flow: tdd-cycle-flow
-    flow-version: "^3"
+    flow-version: "^4"
    conditions:
      design_declared:
        yagni: ==no_premature_abstractions
@@ -52,7 +50,7 @@ states:
      description: "R independently verifies implementation across three tiers — design, structure, and conventions — before commit"
      git: feature
    flow: review-gate-flow
-    flow-version: "^4"
+    flow-version: "^6"
    next:
      pass: commit
      fail: tdd-cycle
@@ -70,7 +68,7 @@ states:
      - design_review_evidence
      - structure_review_evidence
      - conventions_review_evidence
-      - features/.feature
+      - features/.feature
    out:
      - feature_commits
    next:
diff --git a/.flowr/flows/discovery-flow.mermaid b/.flowr/flows/discovery-flow.mermaid
new file mode 100644
index 00000000..32dde4a8
--- /dev/null
+++ b/.flowr/flows/discovery-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"stakeholder-interview\" as stakeholder-interview\n state \"domain-discovery\" as domain-discovery\n state \"scope-boundary\" as scope-boundary\n state \"feature-discovery\" as feature-discovery\n stakeholder-interview --> domain-discovery : needs_full_discovery\n stakeholder-interview --> scope-boundary : needs_scope_only\n stakeholder-interview --> complete : already_known\n domain-discovery --> scope-boundary : done\n domain-discovery --> stakeholder-interview : needs_reinterview\n scope-boundary --> feature-discovery : done\n scope-boundary --> stakeholder-interview : needs_reinterview\n feature-discovery --> complete : done | committed_to_main_locally: ==verified\n feature-discovery --> stakeholder-interview : needs_reinterview"
+}
diff --git a/.flowr/flows/discovery-flow.yaml b/.flowr/flows/discovery-flow.yaml
index c6791474..7087e3ed 100644
--- a/.flowr/flows/discovery-flow.yaml
+++ b/.flowr/flows/discovery-flow.yaml
@@ -1,5 +1,5 @@
 flow: discovery-flow
-version: 6.0.0
+version: 11.0.0

 exits:
   - complete
@@ -13,73 +13,41 @@ states:
      - conduct-interview
    in:
      - "interview-notes/*.md" # optional
+      - "domain_model.md" # optional — for re-interview context
+      - "product_definition.md" # optional — for re-interview context
    out:
-      - interview-notes/.md:
-          - pain_points
-          - business_goals
-          - terms_to_define
-          - quality_attributes
+      - "interview-notes/.md"
    next:
-      needs_full_discovery: event-storming
+      needs_full_discovery: domain-discovery
      needs_scope_only: scope-boundary
      already_known: complete

-  - id: event-storming
+  - id: domain-discovery
    attrs:
-      description: "DE facilitates an event storming workshop to surface domain events, commands, and aggregate candidates"
+      description: "DE facilitates event storming and co-emergent ubiquitous language definition, producing domain_model.md and glossary.md as paired outputs"
      owner: DE
      git: main
    skills:
      - facilitate-event-storming
-    in:
-      - interview-notes/*.md
-    out:
-      - event_storming.md:
-          - event_map
-          - context_candidates
-          - aggregate_candidates
-    next:
-      done: language-definition
-      needs_reinterview: stakeholder-interview
-
-  - id: language-definition
-    attrs:
-      description: "DE formalizes the ubiquitous language by defining domain terms into a glossary"
-      owner: DE
-      git: main
-    skills:
+      - domain-discovery
      - define-ubiquitous-language
    in:
      - interview-notes/*.md
-      - event_storming.md
-      - "domain_model.md" # optional
-    out:
-      - glossary.md
-    next:
-      done: domain-modeling
-      needs_restorming: event-storming
-
-  - id: domain-modeling
-    attrs:
-      description: "DE formalizes candidates into proper bounded contexts, entities, relationships, and aggregate boundaries"
-      owner: DE
-      git: main
-    skills:
-      - model-domain
-    in:
-      - glossary.md
-      - event_storming.md
+      - "domain_model.md" # optional — cumulative edit across iterations
+      - "glossary.md" # optional — cumulative edit across iterations
    out:
      - domain_model.md:
+          - summary
          - bounded_contexts
+          - events_and_commands
          - entities
          - relationships
          - aggregate_boundaries
-          - summary
+          - context_map
+          - changes
      - glossary.md
    next:
      done: scope-boundary
-      contradiction_found: language-definition
      needs_reinterview: stakeholder-interview

  - id: scope-boundary
@@ -108,18 +76,19 @@ states:
  - id: feature-discovery
    attrs:
-      description: "PO synthesizes analysis artifacts into coherent feature boundaries with scoped business rules and constraints"
+      description: "PO identifies feature boundaries from the delivery order, then derives business rules and constraints from domain model artifacts"
      owner: PO
      git: main
    skills:
      - discover-features
+      - discover-rules
    in:
      - product_definition.md
      - domain_model.md
      - glossary.md
-      - "technical_design.md" # optional
+      - "features/*.feature" # optional — existing features for cumulative edit
    out:
-      - features/.feature:
+      - features/.feature:
          - title
          - description
          - rules_business
diff --git a/.flowr/flows/document-dependencies.yaml b/.flowr/flows/document-dependencies.yaml
deleted file mode 100644
index f0c962c2..00000000
--- a/.flowr/flows/document-dependencies.yaml
+++ /dev/null
@@ -1,201 +0,0 @@
-flow: document-dependencies
-version: 6.0.0
-exits: [standalone]
-
-states:
-  - id: interview-notes
-    attrs:
-      description: "Raw stakeholder research — pain points, goals, terms, quality attributes"
-      role: raw-source
-      audience: business
-      produced_by: "discovery → stakeholder-interview"
-      observation: "Only read by synthesis states (domain-model, glossary). Downstream states read synthesized documents instead of raw notes."
-      sections:
-        - "General (Q&A)"
-        - "Feature: (Q&A)"
-        - "Quality Attributes"
-        - "Pain Points Identified"
-        - "Business Goals Identified"
-        - "Terms to Define"
-        - "Action Items"
-    next:
-      standalone: standalone
-
-  - id: domain-model
-    attrs:
-      description: "The WHAT — bounded contexts, entities, relationships, aggregate boundaries, and the WHY for each"
-      role: specification
-      audience: "business, architect, developer"
-      produced_by: "discovery → domain-modeling"
-      observation: "Absorbed system.md's rationale columns. Why Separate and Why Grouped capture architectural reasoning that previously lived in a separate document. Event storming intermediate sections live in event-storming.md — not here."
-      sections:
-        - "Summary"
-        - "Bounded Contexts (Context, Responsibility, Key Entities, Integration Points, Why Separate)"
-        - "Entities"
-        - "Relationships"
-        - "Aggregate Boundaries (Aggregate, Root Entity, Invariants, Bounded Context, Why Grouped)"
-        - "Changes"
-    next:
-      "← Event Map, Context Candidates, Aggregate Candidates": event-storming
-
-  - id: event-storming
-    attrs:
-      description: "Workshop output — domain events, commands, read models, context/aggregate candidates"
-      role: raw-source
-      audience: "business, architect"
-      produced_by: "discovery → event-storming"
-      observation: "Ephemeral workshop file kept as separate document. Not part of domain_model.md. Preserved in git for traceability but not consumed downstream after domain-modeling completes."
-      sections:
-        - "Domain Events"
-        - "Commands"
-        - "Read Models"
-        - "Context Candidates"
-        - "Aggregate Candidates"
-    next:
-      standalone: standalone
-
-  - id: glossary
-    attrs:
-      description: "Shared language — term definitions, aliases, examples. Universally read for naming consistency."
-      role: specification
-      audience: "business, architect, developer"
-      produced_by: "discovery → language-definition"
-      sections:
-        - "Term entries (append-only: Term, Definition, Aliases, Example, Source)"
-    next:
-      "← Terms to Define": interview-notes
-      "← Bounded Contexts, Entities, Context Candidates": domain-model
-      "← Domain Events, Commands": event-storming
-
-  - id: product-definition
-    attrs:
-      description: "The WHY & WHO — scope, users, delivery order, quality attributes, DoD"
-      role: specification
-      audience: "business, architect, developer"
-      produced_by: "discovery → scope-boundary, architecture → architecture-assessment, planning → definition-of-done"
-      sections:
-        - "What IS / What IS NOT"
-        - "Why"
-        - "Users"
-        - "Quality Attributes"
-        - "Out of Scope"
-        - "Delivery Order"
-        - "Project Conventions (DoD, Deployment, Branch Strategy)"
-        - "Scope Changes"
-    next:
-      "← Summary, Bounded Contexts → What IS, Users, Delivery Order": domain-model
-
-  - id: context-map
-    attrs:
-      description: "DDD strategic design — context relationships, integration patterns, ACLs"
-      role: specification
-      audience: architect
-      produced_by: "architecture → context-mapping"
-      sections:
-        - "Context Relationships"
-        - "Context Map Diagram"
-        - "Integration Points"
-        - "Anti-Corruption Layers"
-        - "Changes"
-    next:
-      "← Bounded Contexts, Integration Points → Context Relationships, Integration, ACLs": domain-model
-      "← Quality Attributes, Deployment → integration patterns (sync/async)": product-definition
-
-  - id: technical-design
-    attrs:
-      description: "The HOW — architecture, stack, API/event contracts, interfaces, modules, constraints, key decisions"
-      role: specification
-      audience: "architect, developer"
-      produced_by: "architecture → technical-design"
-      observation: "Absorbed system.md's ADR summaries (Active Constraints, Key Decisions). Now the single architect reference — no separate summary document needed."
-      sections:
-        - "Feature"
-        - "Architectural Style"
-        - "Quality Attributes"
-        - "Stack"
-        - "Module Structure"
-        - "API Contracts"
-        - "Event Contracts"
-        - "Interface Definitions"
-        - "C4 Diagrams"
-        - "Dependencies"
-        - "Configuration Keys"
-        - "Active Constraints [from ADRs]"
-        - "Key Decisions [from ADRs]"
-        - "Changes"
-    next:
-      "← Context Relationships, Integration Points, ACLs → API/Event Contracts, Interfaces": context-map
-      "← Bounded Contexts, Entities, Relationships → Module Structure, API Contracts": domain-model
-      "← Quality Attributes, Deployment → Stack, Dependencies, QA mapping": product-definition
-
-  - id: adr
-    attrs:
-      description: "Decision records — context, alternatives, rationale, consequences, risk"
-      role: specification
-      audience: "architect, developer"
-      produced_by: "architecture → adr-draft"
-      observation: "Read by review-signoff and design-review (optional). All other states read technical-design's Active Constraints + Key Decisions instead of raw ADRs."
-      sections:
-        - "Status"
-        - "Context"
-        - "Interview"
-        - "Decision"
-        - "Reason"
-        - "Alternatives Considered"
-        - "Consequences"
-        - "Risk Assessment"
-    next:
-      "← Architectural Style, Stack, Module Structure, QA → Context, Interview, Risk": technical-design
-      "← Context Relationships → Context": context-map
-      "← Bounded Contexts, Aggregate Boundaries → Context": domain-model
-      "← Quality Attributes, Scope → Context, Risk": product-definition
-
-  - id: features
-    attrs:
-      description: "The BEHAVIOR — per-feature rules, BDD examples, constraints"
-      role: specification
-      audience: "business, developer"
-      produced_by: "discovery → feature-discovery, planning → feature-breakdown, planning → feature-examples"
-      sections:
-        - "Title"
-        - "Description"
-        - "Rules (Business) [superseded by Rules]"
-        - "Constraints"
-        - "Questions"
-        - "Rules (refined, INVEST-validated)"
-        - "Examples (Given/When/Then with @id tags)"
-        - "Changes"
-    next:
-      "← Delivery Order, Quality Attributes, What IS/IS NOT → priority, Constraints, scope": product-definition
-      "← Entities, Relationships, Aggregate Boundaries → feature scope, boundaries": domain-model
-      "← API Contracts, Interface Definitions → feature scope": technical-design
-      "← Integration Points, ACLs → project skeleton structure": context-map
-
-  - id: branding
-    attrs:
-      description: "Brand identity — personality, visual, wording, release naming"
-      role: specification
-      audience: business
-      produced_by: "branding → setup-branding, branding → design-colors"
-      sections:
-        - "Identity"
-        - "Visual (Logo, Banner)"
-        - "Release Naming"
-        - "Wording"
-    next:
-      standalone: standalone
-
-  - id: post-mortem
-    attrs:
-      description: "Failure records — root cause, missed gate, fix, restart check"
-      role: operational
-      audience: developer
-      produced_by: "post-mortem → document-findings, extract-lessons, action-items"
-      sections:
-        - "Failed At"
-        - "Root Cause"
-        - "Missed Gate"
-        - "Fix"
-        - "Restart Check"
-    next:
-      standalone: standalone
diff --git a/.flowr/flows/feature-development-flow.mermaid b/.flowr/flows/feature-development-flow.mermaid
new file mode 100644
index 00000000..0c8dc9b3
--- /dev/null
+++ b/.flowr/flows/feature-development-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n planning --> planning-flow\n note right of planning: invokes planning-flow\n state \"planning\" as planning\n development --> development-flow\n note right of development: invokes development-flow\n state \"development\" as development\n delivery --> delivery-flow\n note right of delivery: invokes delivery-flow\n state \"delivery\" as delivery\n post-mortem --> post-mortem-flow\n note right of post-mortem: invokes post-mortem-flow\n state \"post-mortem\" as post-mortem\n planning --> development : complete\n planning --> needs_architecture : needs_architecture\n planning --> completed : no_features\n development --> delivery : done\n development --> planning : needs_planning\n delivery --> planning : next-feature\n delivery --> post-mortem : rejected\n delivery --> development : needs_development\n delivery --> cancelled : cancelled\n post-mortem --> planning : complete\n post-mortem --> needs_architecture : needs_architecture\n post-mortem --> cancelled : no_action"
+}
diff --git a/.flowr/flows/feature-development-flow.yaml b/.flowr/flows/feature-development-flow.yaml
index 0b1f8ec4..c0013bdc 100644
--- a/.flowr/flows/feature-development-flow.yaml
+++ b/.flowr/flows/feature-development-flow.yaml
@@ -1,6 +1,6 @@
 flow: feature-development-flow
 version: 7.0.0
-params: [feature_name]
+params: [feature_id]

 exits:
   - needs_architecture
diff --git a/.flowr/flows/main-flow.mermaid b/.flowr/flows/main-flow.mermaid
new file mode 100644
index 00000000..6bcb0b8c
--- /dev/null
+++ b/.flowr/flows/main-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n discovery --> discovery-flow\n note right of discovery: invokes discovery-flow\n state \"discovery\" as discovery\n architecture --> architecture-flow\n note right of architecture: invokes architecture-flow\n state \"architecture\" as architecture\n feature-development --> feature-development-flow\n note right of feature-development: invokes feature-development-flow\n state \"feature-development\" as feature-development\n discovery --> architecture : complete\n architecture --> feature-development : complete\n architecture --> discovery : needs_discovery\n feature-development --> architecture : needs_architecture\n feature-development --> cancelled : cancelled\n feature-development --> completed : completed"
+}
diff --git a/.flowr/flows/planning-flow.mermaid b/.flowr/flows/planning-flow.mermaid
new file mode 100644
index 00000000..2be9482a
--- /dev/null
+++ b/.flowr/flows/planning-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"feature-selection\" as feature-selection\n state \"feature-breakdown\" as feature-breakdown\n state \"feature-examples\" as feature-examples\n state \"create-py-stubs\" as create-py-stubs\n state \"definition-of-done\" as definition-of-done\n state \"ready\" as ready\n feature-selection --> feature-breakdown : selected\n feature-selection --> needs_architecture : needs_architecture\n feature-selection --> no_features : no_features\n feature-breakdown --> feature-examples : done | independent: ==no_shared_data_or_side_effects, negotiable: ==scope_negotiated, valuable: ==user_value_clear, estimable: ==effort_estimated, small: ==fits_single_sprint, testable: ==acceptance_criteria_defined\n feature-breakdown --> feature-breakdown : needs_respecification\n feature-examples --> create-py-stubs : done | all_examples_have_ids: ==verified, all_examples_have_gherkin: ==verified, premortem_done: ==verified, concerns: <=2, must_examples: <=8, all_examples_observable: ==each_then_describes_single_outcome, all_examples_declarative: ==behaviour_not_ui_steps, distinctness_verified: ==no_duplicate_observable_behaviours\n feature-examples --> feature-breakdown : needs_respecification\n create-py-stubs --> definition-of-done : done\n definition-of-done --> ready : done\n ready --> complete : done | feature_status: ==BASELINED, committed_to_main_locally: ==verified"
+}
diff --git a/.flowr/flows/planning-flow.yaml b/.flowr/flows/planning-flow.yaml
index 0e210edc..b0669870 100644
--- a/.flowr/flows/planning-flow.yaml
+++ b/.flowr/flows/planning-flow.yaml
@@ -1,6 +1,6 @@
 flow: planning-flow
-version: 7.0.0
-params: [feature_name]
+version: 9.0.0
+params: [feature_id]
 exits:
   - complete
   - needs_architecture
@@ -16,7 +16,8 @@ states:
      - select-feature
    in:
      - product_definition.md
-      - technical_design.md
+      - domain_model.md
+      - "features/.feature" # optional — discover available features
    out: []
    next:
      selected: feature-breakdown
@@ -31,11 +32,12 @@ states:
    skills:
      - break-down-feature
    in:
-      - features/.feature
+      - features/.feature
      - product_definition.md
-      - technical_design.md
+      - domain_model.md
+      - glossary.md
    out:
-      - features/.feature:
+      - features/.feature:
          - rules
    conditions:
      invest_passed:
@@ -59,12 +61,12 @@ states:
    skills:
      - write-bdd-features
    in:
-      - features/.feature
+      - features/.feature
      - product_definition.md
      - domain_model.md
      - glossary.md
    out:
-      - features/.feature:
+      - features/.feature:
          - examples
    conditions:
      examples_have_ids:
@@ -99,8 +101,7 @@ states:
    skills:
      - create-py-stubs
    in:
-      - features/.feature
-      - technical_design.md
+      - features/.feature
      - domain_model.md
      - glossary.md
    out:
@@ -120,7 +121,7 @@ states:
    skills:
      - define-done
    in:
-      - features/.feature
+      - features/.feature
      - product_definition.md
    out:
      - product_definition.md:
@@ -136,8 +137,9 @@ states:
    skills:
      - confirm-baseline
    in:
-      - features/.feature
+      - features/.feature
      - product_definition.md
+      - domain_model.md
    out: []
    conditions:
      feature_baselined:
diff --git a/.flowr/flows/post-mortem-flow.mermaid b/.flowr/flows/post-mortem-flow.mermaid
new file mode 100644
index 00000000..438fcfeb
--- /dev/null
+++ b/.flowr/flows/post-mortem-flow.mermaid
@@ -0,0 +1,3 @@
+{
+  "mermaid": "stateDiagram-v2\n state \"root-cause-analysis\" as root-cause-analysis\n state \"document-findings\" as document-findings\n state \"extract-lessons\" as extract-lessons\n state \"action-items\" as action-items\n root-cause-analysis --> document-findings : issues_found\n root-cause-analysis --> no_action : no_issues_found\n document-findings --> extract-lessons : done\n extract-lessons --> action-items : done\n action-items --> complete : replan\n action-items --> needs_architecture : architecture_issue\n action-items --> no_action : abandon"
+}
diff --git a/.flowr/flows/post-mortem-flow.yaml b/.flowr/flows/post-mortem-flow.yaml
index 562413cd..4b2e4275 100644
--- a/.flowr/flows/post-mortem-flow.yaml
+++ b/.flowr/flows/post-mortem-flow.yaml
@@ -30,7 +30,7 @@ states:
    in:
      - root_cause_analysis
    out:
-      - post-mortem/PM_YYYYMMDD_.md:
+      - post-mortem/PM_YYYYMMDD_.md:
          - failed_at
          - root_cause
          - missed_gate
@@ -45,9 +45,9 @@ states:
    skills:
      - extract-lessons
in: - - post-mortem/PM_YYYYMMDD_.md + - post-mortem/PM_YYYYMMDD_.md out: - - post-mortem/PM_YYYYMMDD_.md: + - post-mortem/PM_YYYYMMDD_.md: - fix next: done: action-items @@ -60,9 +60,9 @@ states: skills: - determine-action-items in: - - post-mortem/PM_YYYYMMDD_.md + - post-mortem/PM_YYYYMMDD_.md out: - - post-mortem/PM_YYYYMMDD_.md: + - post-mortem/PM_YYYYMMDD_.md: - restart_check next: replan: complete diff --git a/.flowr/flows/review-gate-flow.mermaid b/.flowr/flows/review-gate-flow.mermaid new file mode 100644 index 00000000..e5bcace7 --- /dev/null +++ b/.flowr/flows/review-gate-flow.mermaid @@ -0,0 +1,3 @@ +{ + "mermaid": "stateDiagram-v2\n state \"design-review\" as design-review\n state \"structure-review\" as structure-review\n state \"conventions-review\" as conventions-review\n design-review --> structure-review : pass | alignment: ==domain_model_verified, adr_compliance: ==adrs_respected\n design-review --> fail : fail\n structure-review --> conventions-review : pass | coverage: ==threshold_met, traceability: ==all_ids_covered, coupling: ==behavior_not_implementation\n structure-review --> fail : fail\n conventions-review --> pass : pass | formatting: ==clean, naming: ==domain_language\n conventions-review --> fail : fail" +} diff --git a/.flowr/flows/review-gate-flow.yaml b/.flowr/flows/review-gate-flow.yaml index 2d5dfe24..2b13e341 100644 --- a/.flowr/flows/review-gate-flow.yaml +++ b/.flowr/flows/review-gate-flow.yaml @@ -1,6 +1,6 @@ flow: review-gate-flow -version: 4.0.0 -params: [feature_name] +version: 6.0.0 +params: [feature_id] exits: - pass - fail @@ -14,12 +14,10 @@ states: skills: - review-design in: + - features/.feature - domain_model.md - glossary.md - - technical_design.md - - context_map.md - product_definition.md - - "adr/*.md" # optional - refactored_source out: - design_review_evidence @@ -45,7 +43,7 @@ states: - coverage_reports - test_output - refactored_source - - features/.feature + - features/.feature - domain_model.md - 
glossary.md out: @@ -83,4 +81,4 @@ states: pass: to: pass when: conventions_approved - fail: fail \ No newline at end of file + fail: fail diff --git a/.flowr/flows/setup-project-flow.mermaid b/.flowr/flows/setup-project-flow.mermaid new file mode 100644 index 00000000..d2434bf5 --- /dev/null +++ b/.flowr/flows/setup-project-flow.mermaid @@ -0,0 +1,3 @@ +{ + "mermaid": "stateDiagram-v2\n state \"assess-requirements\" as assess-requirements\n state \"configure-parameters\" as configure-parameters\n state \"apply-substitutions\" as apply-substitutions\n state \"verify-and-finalize\" as verify-and-finalize\n assess-requirements --> configure-parameters : assessed\n assess-requirements --> cancelled : cancelled\n configure-parameters --> apply-substitutions : confirmed | pyproject_toml: ==exists, readme_md: ==exists, github_workflows_ci_yml: ==exists, license: ==exists, tests_unit_main_test_py: ==exists, app_directory: ==exists\n configure-parameters --> cancelled : missing_files\n apply-substitutions --> verify-and-finalize : applied | no_stale_app_imports: ==verified, package_renamed: ==verified, version_reset: ==verified\n apply-substitutions --> cancelled : failed\n verify-and-finalize --> initialized : initialized | tests_pass: ==verified, imports_valid: ==verified, artifacts_cleaned: ==verified, committed_to_main_locally: ==verified\n verify-and-finalize --> cancelled : failed" +} diff --git a/.flowr/flows/tdd-cycle-flow.mermaid b/.flowr/flows/tdd-cycle-flow.mermaid new file mode 100644 index 00000000..268a83e4 --- /dev/null +++ b/.flowr/flows/tdd-cycle-flow.mermaid @@ -0,0 +1,3 @@ +{ + "mermaid": "stateDiagram-v2\n state \"red\" as red\n state \"green\" as green\n state \"refactor\" as refactor\n red --> green : test_written\n red --> blocked : blocked\n green --> refactor : test_passes\n refactor --> red : next_example\n refactor --> all_green : all_examples_pass" +} diff --git a/.flowr/flows/tdd-cycle-flow.yaml b/.flowr/flows/tdd-cycle-flow.yaml index 
c31f63e2..f5e60574 100644 --- a/.flowr/flows/tdd-cycle-flow.yaml +++ b/.flowr/flows/tdd-cycle-flow.yaml @@ -1,6 +1,6 @@ flow: tdd-cycle-flow -version: 3.0.0 -params: [feature_name] +version: 4.0.0 +params: [feature_id] exits: - all_green - blocked @@ -16,6 +16,9 @@ states: in: - test_skeletons - typed_source_stubs + - features/.feature + - domain_model.md + - glossary.md out: - test_implementations next: @@ -32,6 +35,9 @@ states: in: - test_implementations - typed_source_stubs + - features/.feature + - domain_model.md + - glossary.md out: - source_implementations next: @@ -47,6 +53,9 @@ states: in: - source_implementations - test_implementations + - features/.feature + - domain_model.md + - glossary.md out: - source_implementations - refactored_source diff --git a/.opencode/knowledge/architecture/assessment.md b/.opencode/knowledge/architecture/assessment.md index 6dc6dede..3a10afa4 100644 --- a/.opencode/knowledge/architecture/assessment.md +++ b/.opencode/knowledge/architecture/assessment.md @@ -9,7 +9,7 @@ last-updated: 2026-04-29 ## Key Takeaways - Delivery mechanism is the boundary between the domain and the outside world (Cockburn, 2005): HTTP, CLI, message queue, etc. It must be verified against the product definition before designing anything. -- Architecture exists when technical_design.md and context_map.md both contain meaningful content aligned with the current domain. +- Architecture exists when domain_model.md and product_definition.md both contain meaningful content aligned with the current domain. - If architecture exists but delivery mechanism mismatches, record it as an ADR before proceeding. - Hexagonal architecture (Ports & Adapters, Cockburn, 2005) keeps the domain independent of delivery mechanism. Verify this is followed. - SA conducts an assessment interview to verify and correct quality attributes, deployment constraints, and hidden requirements before routing. 
@@ -18,7 +18,7 @@ last-updated: 2026-04-29 **Delivery Mechanism Verification**: Before designing a feature, the architect must verify that the delivery mechanism stated in the product definition (e.g., "web application", "CLI tool", "API service") matches the actual codebase implementation. A mismatch (e.g., product says "web" but codebase is CLI) must be recorded as an ADR and resolved before proceeding. This checkpoint prevents building on a foundation that doesn't match the product's intent. -**Architecture Existence Check**: Architecture is considered to exist when two documents contain meaningful, aligned content: technical_design.md (technical decisions, active constraints, key decisions) and context_map.md (bounded context relationships). Empty or placeholder content does not count. If both exist and are coherent, the architect evaluates whether the existing architecture covers the new feature or needs updating. +**Architecture Existence Check**: Architecture is considered to exist when two documents contain meaningful, aligned content: domain_model.md (bounded contexts, entities, relationships, context map) and product_definition.md (quality attributes, technology stack, dependencies). Empty or placeholder content does not count. If both exist and are coherent, the architect evaluates whether the existing architecture covers the new feature or needs updating. **Hexagonal Architecture (Ports & Adapters, Cockburn, 2005)**: The domain core must not depend on infrastructure. Ports define what the domain needs; adapters provide concrete implementations. When reviewing architecture, verify that external dependencies (databases, frameworks, APIs) are behind Protocol interfaces, not directly referenced in domain code. 
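The Ports & Adapters rule above can be sketched with `typing.Protocol`: the domain depends only on a port interface, and any concrete adapter satisfies it structurally. A minimal illustration, not part of this repo — the `OrderGateway` port, `place_order` domain function, and `InMemoryGateway` adapter are hypothetical names.

```python
from typing import Protocol


class OrderGateway(Protocol):
    """Port: what the domain needs from the outside world."""

    def submit(self, symbol: str, qty: int) -> str: ...


def place_order(gateway: OrderGateway, symbol: str, qty: int) -> str:
    """Domain logic depends only on the port, never on a concrete adapter."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return gateway.submit(symbol, qty)


class InMemoryGateway:
    """Adapter: one concrete implementation, swappable for an HTTP or CLI one
    without touching the domain code."""

    def __init__(self) -> None:
        self.submitted: list[tuple[str, int]] = []

    def submit(self, symbol: str, qty: int) -> str:
        self.submitted.append((symbol, qty))
        return f"order-{len(self.submitted)}"
```

Because `Protocol` uses structural subtyping, `InMemoryGateway` needs no inheritance — which is exactly what a review checks: the domain module imports no adapter.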
diff --git a/.opencode/knowledge/architecture/quality-attributes.md b/.opencode/knowledge/architecture/quality-attributes.md index 3c9cebbb..24bbbd76 100644 --- a/.opencode/knowledge/architecture/quality-attributes.md +++ b/.opencode/knowledge/architecture/quality-attributes.md @@ -48,13 +48,12 @@ last-updated: 2026-04-29 ### Quality Attributes in Architecture Documents -When documenting quality attributes in `technical_design.md`: +When documenting quality attributes in `product_definition.md`: - Each attribute must link to an architectural decision that addresses it - Each architectural decision must link to an ADR - Priority order must be explicit (which attribute wins when they conflict) ## Related -- [[architecture/technical-design]] -- [[architecture/adr]] -- [[architecture/assessment]] \ No newline at end of file +- [[architecture/assessment]] +- [[architecture/adr]] \ No newline at end of file diff --git a/.opencode/knowledge/architecture/reconciliation.md b/.opencode/knowledge/architecture/reconciliation.md index 5cdbede5..17b0adff 100644 --- a/.opencode/knowledge/architecture/reconciliation.md +++ b/.opencode/knowledge/architecture/reconciliation.md @@ -38,7 +38,7 @@ last-updated: 2026-04-29 When a mismatch is found: 1. **Record the mismatch**: Which two documents, which specific items, and how they disagree. -2. **Determine which side changes**: If the architecture is wrong, update domain_model.md, technical_design.md, or the ADR. If the requirements are wrong, update the feature file or product definition. +2. **Determine which side changes**: If the architecture is wrong, update domain_model.md, product_definition.md, or the ADR. If the requirements are wrong, update the feature file or product definition. 3. **Update both documents**: Ensure the correction is reflected in all affected documents. 4. **Re-run the affected check**: Verify the mismatch is resolved. 
diff --git a/.opencode/knowledge/domain-modeling/domain-modeling.md b/.opencode/knowledge/domain-modeling/domain-modeling.md new file mode 100644 index 00000000..42c726a7 --- /dev/null +++ b/.opencode/knowledge/domain-modeling/domain-modeling.md @@ -0,0 +1,77 @@ +--- +domain: domain-modeling +tags: [ddd, bounded-contexts, entities, relationships, aggregates, domain-model] +last-updated: 2026-05-08 +--- + +# Domain Modeling + +## Key Takeaways + +- Domain modeling formalizes event storming candidates into bounded contexts, entities, relationships, aggregate boundaries, and a context map — the convergent synthesis after divergent exploration. +- Bounded contexts are linguistic boundaries where every term has one meaning. Formalize by clustering aggregates that share a ubiquitous language and separating where terms change meaning (Evans, 2003). +- Entities have identity and lifecycle (Slot, TrackedOrder, Position). Value objects have no identity — defined by attributes (Token, Pair, Orderbook). Only entities can be aggregate roots. +- Relationships capture composition (Orderbook composed of PriceLevels), dependency (Strategy uses MarketSnapshot), and domain flow (Strategy produces OrderAction). Cardinality (1:1, 1:N, M:N) constrains design. +- Aggregate boundaries define transactional consistency. Every invariant within an aggregate must hold after each transaction. Invariants spanning entities indicate they belong to the same aggregate. + +## Concepts + +**Formalization Process**: Event storming produces candidates (events, commands, aggregates, contexts). Domain modeling formalizes these into a structural specification. For each candidate context: identify entities from event/command payloads, classify as entities (have identity) or value objects (defined by attributes), determine relationships, and define aggregate boundaries from transactional consistency requirements. + +**Bounded Context Identification**: Contexts emerge from linguistic boundaries. 
A boundary exists when: the same term changes meaning, different consistency requirements apply, or independent deployment is needed. Each context documents: name, responsibility, key entities, business capability, why separate, and integration points. + +**Entity vs Value Object**: An entity has identity persisting across state changes (a Slot is the same Slot regardless of state). A value object has no identity — defined entirely by attribute values (PriceLevel at price=100, qty=5 equals any other with same values). Only entities can be aggregate roots. Value objects are always owned by an entity. + +**Relationship Extraction**: Relationships derive from: event flow (entity A produces an event entity B consumes), data flow (entity A composed from entity B's data), and domain constraints (a rule involves both entities). Three types: composition, dependency, domain flow. Cardinality constrains design: 1:1 enables direct reference, 1:N enables collection, M:N requires indirection. + +**Aggregate Boundary Determination**: An aggregate is the unit of transactional consistency. Group entities sharing invariants (business rules that must hold atomically). Split when: the aggregate exceeds memory, transactions span multiple aggregates, or different parts have different consistency requirements. One aggregate per transaction; cross-aggregate references use identity only. + +## Content + +### Formalization Steps + +1. **List entity candidates from events and commands**: Each event's subject is an entity candidate. Each command's target is an entity candidate. Each read model references entity candidates. Deduplicate across events. + +2. **Classify each candidate**: Entity if it has identity (ID or natural key) and lifecycle. Value object if defined by attributes and immutable. + +3. **Determine relationships**: For each pair, classify: composition (A contains B), dependency (A uses B), or domain flow (A produces output for B). Determine cardinality: 1:1, 1:N, M:N. + +4. 
**Define aggregate boundaries**: Group entities sharing invariants. Document root entity, invariants, and business reason for grouping per [[domain-modeling/event-storming#key-takeaways]]. + +5. **Identify context boundaries**: Group aggregates sharing a ubiquitous language. Document: name, responsibility, key entities, capability, why separate, integration points. + +6. **Map context relationships**: Classify each inter-context relationship per [[domain-modeling/context-mapping#key-takeaways]]. Document upstream, downstream, pattern, and any anti-corruption layer. + +### Entity Table Format + +| Field | Purpose | +|---|---| +| Name | PascalCase entity name | +| Type | Entity or Value Object | +| Description | What it represents | +| Bounded Context | Owning context | +| Aggregate Root? | Yes if root, — if value object | + +### Aggregate Boundary Table Format + +| Field | Purpose | +|---|---| +| Aggregate | PascalCase name | +| Root Entity | Identity root | +| Invariants | Business rules enforced | +| Why Grouped | Business reason for boundary | +| Bounded Context | Owning context | + +### Boundary Decision Heuristics + +- If an invariant references two entities, they share an aggregate +- If an aggregate exceeds memory, split and accept eventual consistency +- If two contexts use the same term with different definitions, they are separate contexts +- If removing one aggregate doesn't change term meanings in another, they are separate contexts +- If a rule can be checked eventually (not immediately), entities may be in different aggregates + +## Related + +- [[domain-modeling/event-storming]] +- [[domain-modeling/context-mapping]] +- [[requirements/ubiquitous-language]] diff --git a/.opencode/knowledge/domain-modeling/event-storming.md b/.opencode/knowledge/domain-modeling/event-storming.md index e33f33e8..e6425f17 100644 --- a/.opencode/knowledge/domain-modeling/event-storming.md +++ b/.opencode/knowledge/domain-modeling/event-storming.md @@ -1,58 +1,105 @@ --- 
domain: domain-modeling -tags: [ddd, event-storming, bounded-contexts, aggregates, domain-events] -last-updated: 2026-04-29 +tags: [ddd, event-storming, domain-events, commands, aggregates, brandolini] +last-updated: 2026-05-08 --- -# Event Storming & Domain Modeling +# Event Storming ## Key Takeaways -- Event storming surfaces domain events (past-tense verbs), commands (imperative verbs), and aggregates (transactional consistency boundaries) from stakeholder interviews (Brandolini, 2012). -- Bounded contexts group related events, commands, and entities. A context boundary is where a term changes meaning (Evans, 2003). -- Aggregates define transactional consistency boundaries (Evans, 2003). Everything within an aggregate must be consistent after a transaction; everything between aggregates is eventually consistent. -- Domain events are expressed in past tense (OrderPlaced, PaymentReceived); commands in imperative (PlaceOrder, ReceivePayment). +- Event storming is a structured brainstorming technique with six phases: chaotic exploration, timeline enforcement, hotspot identification, external system mapping, command mapping, and candidate grouping (Brandolini, 2012). +- Domain events are facts expressed in past tense (OrderPlaced, FillDetected). Extract from interview transcripts by scanning for business-relevant state changes and outcome statements. +- Commands are intents in imperative (PlaceOrder, DetectFill). Each command has an actor, preconditions, and produces zero or more events. Read models are the decision information needed before executing a command. +- Hotspots mark conflicts, ambiguities, or disagreements. They are not resolved during event storming — they are flagged for stakeholder follow-up and indicate context boundary candidates. +- Candidate bounded contexts emerge from clustering related events. Candidate aggregates emerge from grouping events that must be transactionally consistent. 
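The transcript-scanning heuristic in the takeaways above (outcome statements to SubjectVerbEd names) can be sketched as a small scan. The verb list, the `event_candidates` helper, and the phrasing it matches are illustrative assumptions, not part of the workshop format — real extraction is a judgment call, not a regex.

```python
import re

# Illustrative subset of outcome verbs; a real list would come from the glossary.
OUTCOME_VERBS = {
    "placed": "Placed",
    "filled": "Filled",
    "cancelled": "Cancelled",
    "detected": "Detected",
    "rejected": "Rejected",
}


def event_candidates(transcript: str) -> list[str]:
    """Scan for '<subject> was/got <verb>' statements and emit SubjectVerbEd names."""
    candidates = []
    for subject, verb in re.findall(r"the (\w+) (?:was|got) (\w+)", transcript.lower()):
        if verb in OUTCOME_VERBS:
            candidates.append(subject.capitalize() + OUTCOME_VERBS[verb])
    return candidates
```

For example, "First the order was placed, then the fill was detected" yields `OrderPlaced` and `FillDetected`; sentences without outcome verbs yield nothing.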
## Concepts -**Event Storming** (Brandolini, 2012): A collaborative workshop technique where domain experts place domain events on a timeline. The process: identify events (what happens), identify commands (what triggers them), group into bounded contexts (areas of related meaning), and identify aggregates (consistency boundaries). Event storming produces: an event map, candidate bounded contexts, and candidate aggregates. +**Event Storming Phases** (Brandolini, 2012): The workshop proceeds through six phases of increasing structure. Chaotic exploration: brainstorm all domain events without ordering. Timeline enforcement: place events chronologically, adding missing events exposed by gaps. Hotspot identification: mark conflicts, ambiguities, and areas where stakeholders disagree. External systems: identify actors and systems outside the domain. Command mapping: for each event, identify the triggering command and the read model needed. Candidate grouping: cluster into aggregate and bounded context hypotheses. -**Bounded Contexts** (Evans, 2003): A bounded context is a linguistic boundary. Within it, every term has exactly one meaning. When the same word means different things in different parts of the domain, that's a context boundary. For example, "Product" might mean a catalog item in the Sales context and a physical item in the Warehouse context. +**Domain Event Extraction**: Events represent business-relevant state changes that have already occurred. Extract from interview transcripts by scanning for: outcome statements ("the order was filled"), state transitions ("the position changed from flat to long"), and time markers ("after the tick completes"). Each event gets a past-tense name in SubjectVerbEd format: OrderPlaced, FillDetected, SpreadCalculated. Events are facts — they cannot be undone, only compensated by subsequent events. -**Aggregates** (Evans, 2003): An aggregate is a cluster of domain objects treated as a single unit for data changes. 
Every aggregate has a root entity and a consistency boundary: all invariants must be satisfied within a single transaction. References from outside the aggregate point only to the root. Aggregates are the unit of transactional consistency. +**Commands and Read Models**: A command is an intent to change state, expressed in imperative (PlaceOrder, CancelOrder, DetectFill). Each command has an actor (who or what triggers it) and preconditions (what must be true), and it produces zero or more events on success or rejection events on failure. A read model is the information needed to decide whether and how to execute a command. For PlaceOrder, the read model includes current orderbook, balances, and open orders. -**Domain Events**: Something that happened in the domain, expressed in past tense. Events are facts: they cannot be undone, only compensated. Events capture the vocabulary of the domain: OrderPlaced, PaymentReceived, InventoryDepleted. +**Hotspots**: Hotspots are marked when stakeholders disagree about an event's meaning, when two events seem contradictory, or when the same term is used for different concepts. Hotspots are NOT resolved during event storming — they are recorded as boundary candidates and deferred to stakeholder follow-up. The number and location of hotspots reveal where the domain is most complex. -**Commands**: An intent to make something happen, expressed in imperative. Commands may be rejected (insufficient funds, out of stock). When a command succeeds, it produces a domain event: PlaceOrder → OrderPlaced. +**Candidate Grouping**: After all events and commands are identified, group them into hypotheses: aggregates (events/commands that must be transactionally consistent) and bounded contexts (clusters sharing a ubiquitous language). This is a hypothesis, not a final model — the domain-discovery step formalizes these candidates. ## Content -### Event Storming Steps -1.
Brainstorm domain events from interview notes (past-tense, business-relevant) -2. Place events on a chronological timeline -3. For each event, identify the command that triggers it -4. Group related events and commands into candidate bounded contexts -5. Within each context, identify aggregate boundaries: which entities must be transactionally consistent -6. Flag contradictions (same term, different meaning) as context boundaries -7. Flag gaps (events without commands, or commands without events) for follow-up +Goal: surface ALL domain events without judgment or ordering. -### Aggregate Design Rules +Extraction heuristics — scan interview transcripts for: +- Outcome verbs: "placed", "filled", "cancelled", "detected", "calculated", "exceeded" +- State transitions: "changed to", "moved from", "became" +- Business milestones: "completed", "started", "halted", "resumed" +- Exclude technical events (database writes, API calls) — focus on business events +- Include negative/rejection events: OrderRejected, InsufficientFunds -- An aggregate must fit in memory. If it's too large, split it. -- An aggregate must be consistent after every transaction. If invariants span two aggregates, merge them or accept eventual consistency -- References between aggregates use identity (ID), not object references -- One aggregate per transaction. If you need to update two aggregates atomically, reconsider your boundaries. +Naming convention: SubjectVerbEd in PascalCase. Examples: OrderPlaced, FillDetected, PositionOpened, KillSwitchActivated. -### Context Mapping (Evans, 2003) +### Phase 2: Timeline Enforcement -- Upstream/Downstream: one context provides data/services, the other consumes -- Anti-corruption layer: a translation boundary that prevents upstream concepts from leaking into downstream -- Conformist: downstream accepts upstream's model as-is -- Open-host service: upstream publishes a standardized protocol +Goal: place events in chronological order, which reveals gaps. + +1. 
Arrange events left-to-right on a timeline +2. For each adjacent pair, ask: "What must happen between these two?" +3. Insert any missing events exposed by gaps +4. Identify parallel events (events that happen simultaneously in different contexts) + +### Phase 3: Hotspot Identification + +Goal: mark conflicts and ambiguities for follow-up. + +Mark a hotspot when: +- Stakeholders use the same word for different concepts +- The timeline has contradictory events +- A decision has multiple valid outcomes +- An event's trigger is unclear or contested +- A domain rule is inconsistent across interviews + +Record each hotspot as: `[event/term] — [nature of conflict] — [stakeholders who disagree]`. + +### Phase 4: External Systems and Actors + +Goal: identify what triggers events from outside the domain. + +For each event without an internal trigger: +- Identify the external actor (user, exchange API, timer, external system) +- Note whether the actor is a source (provides data) or a trigger (initiates action) + +### Phase 5: Command Mapping + +Goal: for each event, identify the command and read model. + +For each event, determine: +- **Command**: What intent produces this event? (PlaceOrder → OrderPlaced) +- **Actor**: Who/what issues the command? +- **Read model**: What information does the actor need to decide? +- **Preconditions**: What must be true for success? +- **Rejection event**: What happens on failure? (PlaceOrderRejected) + +Edge cases: +- An event with no command is externally triggered +- A command may produce multiple events +- A command may produce no events if rejected + +### Phase 6: Aggregate and Context Candidates + +Goal: cluster into transactional boundaries and linguistic boundaries. + +**Aggregate candidates**: Group commands that must execute atomically. If two commands must always succeed or fail together, they belong to the same aggregate. If they can execute independently, they are separate aggregates. 
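The atomicity rule above is transitive, so candidate aggregates fall out as the connected components of a "must succeed or fail together" relation between commands. A minimal sketch under that assumption; the command names and the `aggregate_candidates` helper are hypothetical.

```python
from collections import defaultdict


def aggregate_candidates(
    commands: set[str], atomic_pairs: list[tuple[str, str]]
) -> list[set[str]]:
    """Cluster commands into aggregate candidates: commands linked (directly or
    transitively) by an atomicity pair end up in one candidate."""
    graph = defaultdict(set)
    for a, b in atomic_pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen: set[str] = set()
    groups: list[set[str]] = []
    for cmd in sorted(commands):
        if cmd in seen:
            continue
        # Depth-first walk to collect one connected component
        stack, component = [cmd], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node])
        seen |= component
        groups.append(component)
    return groups
```

Commands with no atomicity links come out as singleton candidates, matching "if they can execute independently, they are separate aggregates."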
+ +**Context candidates**: Group aggregates sharing a ubiquitous language. If the same term means different things in two groups, they are separate contexts. If removing one aggregate changes the meaning of terms in another, they are in the same context. + +Record as: `Candidate Aggregate: [name] — [events] — [commands] — [consistency reason]` and `Candidate Context: [name] — [aggregates] — [shared terms]`. ## Related +- [[domain-modeling/domain-modeling]] - [[domain-modeling/context-mapping]] - [[requirements/ubiquitous-language]] -- [[architecture/technical-design]] \ No newline at end of file +- [[requirements/interview-techniques]] diff --git a/.opencode/knowledge/requirements/feature-boundaries.md new file mode 100644 index 00000000..5eb45e5a --- /dev/null +++ b/.opencode/knowledge/requirements/feature-boundaries.md @@ -0,0 +1,71 @@ +--- +domain: requirements +tags: [feature-boundaries, story-mapping, delivery-order, bounded-contexts, feature-naming] +last-updated: 2026-05-08 +--- + +# Feature Boundaries + +## Key Takeaways + +- Feature boundaries are derived from the delivery order in product_definition.md, validated against bounded context and aggregate boundaries from the domain model. Each delivery step becomes a feature candidate. +- A feature should belong primarily to one bounded context. If a delivery step spans two or more contexts, split it along context boundaries. +- A feature should not span multiple aggregate transactional consistency boundaries. If it does, split along aggregate lines. +- Feature names follow the `[Capability]` pattern from the delivery step. Descriptions answer: what it provides, which context it serves, why it exists, and key entities. +- Cross-cutting concerns (risk management, error handling, observability) are not separate features — they appear as Constraints in the features that implement them.
+ +## Concepts + +**Delivery Order as Backbone**: Patton (2014) recommends mapping the user's narrative flow as a backbone, then slicing vertically into releasable increments. The delivery order in product_definition.md is exactly this backbone: each step represents a cohesive capability the system must deliver. Using delivery steps as feature candidates ensures each feature is independently deliverable and follows the dependency graph. + +**Context Alignment Validation**: Each feature candidate must be checked against the domain model's bounded context table. A feature that touches entities from two or more contexts has a boundary problem. Split it: each context gets its own feature. The domain model's "Why Separate" column explains why the contexts were split — the feature split must respect the same reasoning. + +**Aggregate Boundary Validation**: Aggregates define transactional consistency boundaries. A feature that modifies data across two aggregates in one transaction violates aggregate design. If a delivery step spans multiple aggregates, split the feature so each aggregate's invariants are tested within one feature. + +**Naming and Description Convention**: Feature names come from the delivery step name, validated for clarity. Good names are specific enough that a developer knows what to build and a tester knows what to verify. Descriptions follow a four-part pattern: (1) what the feature provides, (2) which bounded context it serves, (3) why it exists — the business need, (4) key entities from the domain model that belong to this feature. + +**Cross-Cutting Concerns**: Risk management, error handling, logging, and observability span multiple contexts. These are not separate features. Instead, they appear as Constraints in the features where they are implemented. The domain model's context map shows which contexts have safety or error-handling responsibilities. Map those responsibilities to Constraints, not to separate features. 
+ +**Exceptions to Context Splitting**: Three patterns justify a feature spanning multiple bounded contexts: (1) Foundational shared types (e.g., "Domain value objects") that all contexts depend on — these belong to a separate shared-kernel feature whose entities have `Domain (shared)` as their context. (2) Orchestrator contexts (e.g., "Execution engine") that coordinate multiple contexts but own no business logic — these are a single feature because splitting the orchestrator would create circular dependencies. (3) Tightly coupled co-deployed contexts (e.g., "Strategy framework" spanning Pricing + Strategy) that share a ubiquitous language and deployment boundary — these are one feature when their integration point is a single protocol call. + +## Content + +### Feature Boundary Derivation Process + +1. **List delivery steps as feature candidates**: Read product_definition.md delivery order. Each numbered step is a feature candidate. Record: step number, name, module, and summary. + +2. **Map each candidate to bounded contexts**: For each candidate, identify which bounded contexts its entities belong to using the domain model's entity table. If a candidate spans multiple contexts, split it. + +3. **Map each candidate to aggregates**: For each candidate, identify which aggregate boundaries its entities belong to using the domain model's aggregate boundary table. If a candidate spans multiple aggregates, validate that the feature does not require cross-aggregate transactions. If it does, split it. + +4. **Validate naming**: Each feature name should be a noun phrase that names a cohesive capability. Avoid vague names ("Core", "Utils", "Infrastructure" without qualification). Prefer specific names ("Domain value objects", "Data infrastructure", "Payment adapter"). + +5. **Write descriptions**: Each description answers four questions: What does this feature provide? Which bounded context does it serve? Why does it exist? 
Which key entities from the domain model belong to it? + +### Splitting Criteria + +When a delivery step spans multiple contexts or aggregates: + +| Signal | Split | Keep together | +|--------|-------|---------------| +| Spans 2+ bounded contexts | Split along context boundaries | Shared-kernel types (Domain shared), Orchestrator, Tightly coupled co-deployed | +| Spans 2+ aggregates | Split along aggregate boundaries | If aggregates must be transactionally consistent | +| Has >2 distinct concerns | Split into separate features per concern | If concerns are inseparable aspects of one capability | +| Delivery step name contains "and" | Likely two features | If "and" joins inseparable aspects | + +### Cross-Cutting Concern Mapping + +For quality attributes that span contexts (risk, error handling, observability): + +1. Check the domain model context map for which contexts participate in the cross-cutting concern +2. For each participating context, add a Constraint to that context's feature +3. Do NOT create a separate "Risk Management" feature — distribute the constraints + +Example: Kill switch behavior appears in Engine (lifecycle control) and Order Execution (cancel all). Both features get a Constraint referencing the kill switch quality attribute. 
+ +## Related + +- [[requirements/feature-discovery]] +- [[requirements/rule-derivation]] +- [[requirements/decomposition]] +- [[domain-modeling/domain-modeling]] \ No newline at end of file diff --git a/.opencode/knowledge/requirements/feature-discovery.md b/.opencode/knowledge/requirements/feature-discovery.md index 6302d266..0b773552 100644 --- a/.opencode/knowledge/requirements/feature-discovery.md +++ b/.opencode/knowledge/requirements/feature-discovery.md @@ -1,40 +1,64 @@ --- domain: requirements tags: [feature-discovery, story-mapping, backlog-creation, gap-analysis] -last-updated: 2026-05-04 +last-updated: 2026-05-08 --- # Feature Discovery ## Key Takeaways -- Feature discovery synthesizes multiple analysis artifacts (domain model, event map, interview notes, delivery order, technical design) into coherent feature boundaries with scoped business rules. It is a genuine analysis step, not mechanical transcription. +- Feature discovery synthesizes analysis artifacts (domain model, delivery order, product definition, glossary) into coherent feature boundaries with scoped business rules. It is a genuine analysis step, not mechanical transcription. - Each feature captures coarse business rules: one-line statements of behavior that the feature must enforce or enable. These are behavioral hypotheses to be validated and refined. -- The PO must identify feature boundaries that respect bounded context borders, aggregate transactional boundaries, and module dependency order. Features that span aggregate boundaries or cross dependency lines are flagged for splitting. +- Feature boundaries respect bounded context borders, aggregate transactional boundaries, and module dependency order per [[requirements/feature-boundaries]]. Features that span boundaries are flagged for splitting. +- Rules are derived systematically from three sources: domain events, aggregate invariants, and commands per [[requirements/rule-derivation]]. 
Every rule traces to at least one domain model artifact. - Gaps discovered during feature discovery (a bounded context with no feature, a quality attribute with no enforcing feature, a domain event with no corresponding rule) are flagged, not silently filled. -- When artifacts are ambiguous, contradictory, or incomplete, the PO asks targeted clarification questions using the same interview techniques (CIT, laddering) as discovery interviews, but scoped to the specific feature boundary or rule under consideration. - Features have a lifecycle of increasing specificity: `Status: ELICITING` through discovery and breakdown, advancing to `BASELINED` after baseline confirmation. ## Concepts -**Feature Boundary Identification**: Deciding where one feature ends and another begins is a design judgment, not a mechanical step. Bounded contexts provide coarse boundaries, but the PO must decide granularity: too coarse and the feature is unmanageable; too fine and you lose cohesion. Patton (2014) recommends mapping the user's narrative flow as a backbone, then slicing vertically into releasable increments. Each slice should be independently deliverable and testable. Cross-reference the domain model's aggregate boundaries and the delivery order's dependency graph to validate that each feature is self-contained. +**Feature Boundary Identification**: Deciding where one feature ends and another begins is a design judgment using the delivery order as backbone (Patton, 2014), validated against bounded context and aggregate boundaries from the domain model. Each delivery step becomes a feature candidate; candidates spanning multiple contexts or aggregates are split per [[requirements/feature-boundaries]]. 
-**Rule Discovery as Hypothesis**: Coarse rules are hypotheses about what the system must do, derived by cross-referencing three sources: domain events ("what must happen when X occurs"), entity invariants ("what must always be true about Y"), and stakeholder goals ("what the user needs to accomplish"). These hypotheses are validated and refined across phases (coarse hypotheses first, validated rules later) preventing premature commitment to story-level detail while ensuring comprehensive coverage across the whole product (Cohn, 2004; Patton, 2014). +**Rule Discovery as Hypothesis**: Coarse rules are hypotheses about what the system must do, derived from three sources: domain events (behavioral rules), entity invariants (structural rules), and commands (action rules) per [[requirements/rule-derivation]]. These hypotheses are validated and refined across phases — coarse bullets first, formal user stories later — preventing premature commitment to story-level detail while ensuring comprehensive coverage (Cohn, 2004; Patton, 2014). -**Targeted Clarification During Discovery**: When synthesizing analysis artifacts into feature boundaries, gaps and contradictions naturally emerge. A delivery step may map to multiple aggregates with unclear ownership. An entity invariant may contradict what the interview notes say. A quality attribute may have no obvious enforcing mechanism. These are not failures of earlier interviews. They are expected consequences of zooming from domain-level understanding to feature-level specificity. Targeted questions use the same techniques as discovery interviews (CIT for specific failure incidents, laddering for "why does this matter?") but are narrower, focused on resolving a specific boundary question rather than exploring the whole domain. 
+**Targeted Clarification During Discovery**: When artifacts are ambiguous, contradictory, or incomplete, ask targeted clarification questions using the same interview techniques (CIT, laddering) as discovery interviews, but scoped to the specific feature boundary or rule under consideration. Record answers in the feature's Questions table. -**Gap Analysis**: Systematically verify coverage across three dimensions: (1) every bounded context from the domain model is covered by at least one feature, (2) every quality attribute from the product definition is enforced by at least one feature's constraints, and (3) every critical domain event is traceable to at least one business rule. Uncovered areas indicate missing features or gaps in the domain model itself. Flag both. +**Gap Analysis**: Systematically verify coverage across three dimensions: (1) every bounded context from the domain model is covered by at least one feature, (2) every quality attribute from the product definition is enforced by at least one feature's constraints, (3) every critical domain event is traceable to at least one business rule. Uncovered areas indicate missing features or gaps in the domain model itself. Flag both. **Feature Lifecycle**: Features follow a lifecycle of increasing specificity across phases: 1. **Discovery**: Feature boundaries identified, coarse business rules written, constraints scoped. Status: ELICITING. -2. **Breakdown**: Coarse rules expanded into full Rule blocks with As a/I want/So that format. INVEST validation applied. Targeted clarification may refine rules. Status remains ELICITING. +2. **Breakdown**: Coarse rules expanded into full Rule blocks with As a/I want/So that format. INVEST validation applied. Status remains ELICITING. 3. **Example Writing and Baseline**: Given/When/Then Examples written, pre-mortems applied, baseline confirmed. Status advances to BASELINED. +## Content + +### Discovery Sequence + +Feature discovery is two sequential activities: + +1. 
**Boundary identification** (discover-features skill): Use the delivery order as backbone. Map each step to bounded contexts and aggregates from the domain model. Split candidates that span contexts or aggregates. Name features and write descriptions per [[requirements/feature-boundaries]]. Create .feature files with title, description, Status: ELICITING, and an empty Questions table. + +2. **Rule derivation** (discover-rules skill): For each feature, assign domain model artifacts (entities, events, invariants, commands) based on bounded context membership. Derive behavioral rules from events, structural rules from invariants, and action rules from commands per [[requirements/rule-derivation]]. Map quality attributes to constraints. Write coarse Rules (Business) bullets and Constraints into each .feature file. + +### Gap Analysis Procedure + +After deriving rules for all features, verify: + +1. **Context coverage**: List every bounded context from the domain model. Check that each has at least one feature. If a context has no feature, flag it as a gap. +2. **Quality attribute enforcement**: List every quality attribute from product_definition.md. Check that each is enforced by at least one feature's Constraints. If a quality attribute has no enforcing feature, flag it as a gap. +3. **Event traceability**: List every critical domain event. Check that each is traceable to at least one business rule. If an event has no rule, flag it as a gap. +4. **Invariant traceability**: List every aggregate invariant. Check that each has at least one rule. If an invariant has no rule, add it. +5. **Command traceability**: List every command. Check that each has at least one rule. If a command has no rule, flag it as out of scope or missing. + +Gaps are recorded in the relevant feature's Questions table with status `Open`. Do NOT silently fill gaps with assumed rules. 
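The coverage checks above can be sketched as a small script. This is an illustrative sketch only: the data shapes (`context`, `constraints`, `rule_sources` keys) and the example names are hypothetical, not a real artifact format, and only checks 1-3 are shown:

```python
# Hypothetical sketch of the gap-analysis procedure; data structures and
# names are illustrative, not a real project artifact format.

def find_gaps(contexts, quality_attributes, events, features):
    """Return gap descriptions; gaps are flagged, never silently filled."""
    gaps = []
    covered = {f["context"] for f in features}
    for ctx in contexts:  # 1. context coverage
        if ctx not in covered:
            gaps.append(f"Context '{ctx}' has no feature")
    enforced = {c for f in features for c in f["constraints"]}
    for qa in quality_attributes:  # 2. quality attribute enforcement
        if qa not in enforced:
            gaps.append(f"Quality attribute '{qa}' has no enforcing feature")
    traced = {e for f in features for e in f["rule_sources"]}
    for event in events:  # 3. event traceability
        if event not in traced:
            gaps.append(f"Event '{event}' has no business rule")
    return gaps

features = [
    {"context": "Order Execution", "constraints": ["Reliability"],
     "rule_sources": ["FillDetected"]},
]
gaps = find_gaps(
    ["Order Execution", "Pricing"],
    ["Reliability", "Safety"],
    ["FillDetected", "KillSwitchActivated"],
    features,
)
print(gaps)
```

Each returned gap would be recorded as an `Open` row in the relevant feature's Questions table, consistent with the rule that gaps are flagged rather than silently filled.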
+ ## Related +- [[requirements/feature-boundaries]]: deriving feature boundaries from delivery order and domain model +- [[requirements/rule-derivation]]: deriving business rules from events, invariants, and commands - [[requirements/invest]]: story quality criteria applied to rules - [[requirements/wsjf]]: feature prioritization applied to BASELINED features - [[requirements/gherkin]]: writing Examples from rules - [[requirements/interview-techniques]]: interview methods for clarification - [[requirements/decomposition]]: splitting Rules -- [[requirements/pre-mortem]]: adversarial analysis applied to rules +- [[requirements/pre-mortem]]: adversarial analysis applied to rules \ No newline at end of file diff --git a/.opencode/knowledge/requirements/rule-derivation.md b/.opencode/knowledge/requirements/rule-derivation.md new file mode 100644 index 00000000..763e1f3b --- /dev/null +++ b/.opencode/knowledge/requirements/rule-derivation.md @@ -0,0 +1,93 @@ +--- +domain: requirements +tags: [rule-derivation, business-rules, invariants, events, commands] +last-updated: 2026-05-08 +--- + +# Rule Derivation + +## Key Takeaways + +- Business rules are derived from three systematic sources: domain events (behavioral rules), aggregate invariants (structural rules), and commands (action rules). Every rule traces back to at least one domain model artifact. +- Event → Rule pattern: "When [event], then [consequence]." Each domain event in the domain model implies at least one behavioral rule about what must happen when that event occurs. +- Invariant → Rule pattern: "[Entity] must always [condition]." Each aggregate invariant in the domain model IS a business rule; record it as-is. +- Command → Rule pattern: "[Actor] can [action] when [precondition]." Each command implies what actors can do and under what conditions. 
+- Quality attributes from product_definition.md constrain rules: each attribute produces at least one Constraint that bounds the feature's behavior with a measurable threshold. + +## Concepts + +**Three Sources of Rules**: Business rules are not invented — they are derived from the domain model. Domain events produce behavioral rules (what happens when). Aggregate invariants produce structural rules (what must always be true). Commands produce action rules (who can do what, when). Cross-referencing all three sources ensures comprehensive coverage; missing rules indicate either a gap in the domain model or an implicit assumption that must be made explicit. + +**Event → Rule Derivation**: Each domain event (past-tense: OrderPlaced, FillDetected, KillSwitchActivated) implies rules about what must happen when that event occurs. The rule answers: what triggered this event? What must be true afterward? What must not happen? Example: "When a fill is detected, the tracked order must be updated before the next detection cycle." + +**Invariant → Rule Derivation**: Each aggregate invariant from the domain model's aggregate boundary table IS a business rule. Record it verbatim as a rule bullet, then refine into active voice during breakdown. Example invariant: "A tracked order must be atomically removed when cancelled or fully filled." This becomes a rule directly. + +**Command → Rule Derivation**: Each command (imperative: PlaceOrder, CancelOrder) implies rules about who can act and under what conditions. The rule answers: who can issue this command? What preconditions must hold? What happens on success? What happens on rejection? Example: "An operator can place a limit order when the spread exceeds the fee floor." + +**Quality Attribute → Constraint Mapping**: Each quality attribute in product_definition.md (latency, reliability, safety) constrains feature behavior. Map each attribute to the feature(s) responsible for enforcing it. 
If no feature enforces a quality attribute, it is a gap. Constraints include measurable thresholds: "Latency: tick-to-order under 100ms", "Reliability: no orphaned orders after crash", "Safety: kill switch halts all trading within 1 tick." + +**Traceability Matrix**: Every rule must trace back to at least one domain model artifact (event, invariant, or command). Every domain event, invariant, and command must trace forward to at least one rule. Gaps in either direction indicate missing rules or incomplete domain modeling. + +## Content + +### Derivation Procedure + +For each feature, starting with the feature that has the most entities from the domain model: + +**Step 1: Assign domain model artifacts to features.** Using the bounded context column in the domain model's entity table, assign each entity to the feature that corresponds to its context. Assign each aggregate invariant to the feature that contains the aggregate root. Assign each domain event and command to the feature corresponding to its bounded context. + +**Step 2: Derive behavioral rules from events.** For each event assigned to this feature: +- What triggers this event? → Rule about precondition +- What must happen after this event? → Rule about consequence +- What must NOT happen during/after this event? → Rule about prohibition +- Write each as a coarse bullet: "When [event], then [consequence]" + +**Step 3: Derive structural rules from invariants.** For each invariant assigned to this feature: +- Record the invariant verbatim as a rule bullet +- These are non-negotiable: they define the consistency boundary +- Write as: "[Entity] must always [condition]" + +**Step 4: Derive action rules from commands.** For each command assigned to this feature: +- Who can issue this command? → Rule about actor +- What preconditions must hold? → Rule about guard condition +- What happens on rejection? 
→ Rule about failure handling +- Write as: "[Actor] can [action] when [precondition]" + +**Step 5: Map quality attributes to constraints.** For each quality attribute in product_definition.md: +- Which feature(s) enforce this attribute? → Add Constraint to those features +- Include measurable threshold from the quality attribute +- If no feature enforces it → add to Questions table as a gap + +### Traceability Verification + +After deriving rules for all features, verify: + +1. **Every event → at least one rule.** If an event has no rule, either the event is out of scope or a rule is missing. +2. **Every invariant → at least one rule.** If an invariant has no rule, add it. +3. **Every command → at least one rule.** If a command has no rule, either it's out of scope or a rule is missing. +4. **Every quality attribute → at least one constraint.** If a quality attribute has no enforcing feature, flag it as a gap. +5. **Every rule → at least one source artifact.** If a rule has no trace to events, invariants, or commands, it may be an assumption that needs validation. 
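The two-way nature of the verification above (every artifact traces forward to a rule, every rule traces back to an artifact) can be sketched as follows; the rule texts and artifact names are hypothetical examples, not prescribed data:

```python
# Illustrative two-way traceability check. Artifact and rule names are
# hypothetical; in practice these come from the domain model and .feature files.

def verify_traceability(artifacts, rules):
    """artifacts: list of event/invariant/command names.
    rules: mapping of rule text -> set of source artifact names."""
    issues = []
    referenced = set().union(*rules.values()) if rules else set()
    for artifact in artifacts:  # forward: every artifact -> at least one rule
        if artifact not in referenced:
            issues.append(f"No rule derived from '{artifact}'")
    for rule, sources in rules.items():  # backward: every rule -> an artifact
        if not sources & set(artifacts):
            issues.append(f"Rule '{rule}' has no source artifact")
    return issues

issues = verify_traceability(
    ["FillDetected", "PlaceOrder"],
    {
        "Update tracked order on fill": {"FillDetected"},
        "Reject oversized orders": set(),  # unsourced: flag as an assumption
    },
)
print(issues)
```

A forward issue means a rule is missing (or the artifact is out of scope); a backward issue means the rule is an assumption that needs validation.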
+ +### Example Derivation + +From `FillDetected` event (Order Execution context): +- "When a fill is detected, the tracked order's filled quantity must be updated atomically" +- "When a fill is detected, the position must reflect the fill before the next tick" + +From `TrackedOrder` aggregate invariant: +- "A tracked order must be atomically removed when cancelled or fully filled" +- "Fill detection must not produce duplicate Fill records" + +From `PlaceOrder` command (Strategy → Order Execution): +- "An operator can place a limit order when the spread exceeds the fee floor" +- "A limit order must specify pair, side, price, and quantity" + +From `Safety` quality attribute → Engine feature constraint: +- "Kill switch must halt all trading within 1 tick cycle" + +## Related + +- [[requirements/feature-discovery]] +- [[requirements/feature-boundaries]] +- [[domain-modeling/event-storming]] +- [[domain-modeling/domain-modeling]] \ No newline at end of file diff --git a/.opencode/knowledge/requirements/ubiquitous-language.md b/.opencode/knowledge/requirements/ubiquitous-language.md index 591f6b5e..aaa9ae76 100644 --- a/.opencode/knowledge/requirements/ubiquitous-language.md +++ b/.opencode/knowledge/requirements/ubiquitous-language.md @@ -1,61 +1,79 @@ --- domain: requirements tags: [ubiquitous-language, glossary, ddd, genus-differentia] -last-updated: 2026-04-29 +last-updated: 2026-05-08 --- # Ubiquitous Language ## Key Takeaways -- Ubiquitous language is a shared vocabulary between domain experts and developers: the same word must mean the same thing to everyone (Evans, 2003). -- Definitions use genus-differentia format: state the category the term belongs to, then state how it differs from other members of that category. -- The glossary is append-only: never delete entries; mark them as retired with a reference to the replacement term. -- Aliases (different words for the same concept) must be documented so teams know when terms are interchangeable. 
+- Ubiquitous language is a shared vocabulary where every term has exactly one meaning within a bounded context (Evans, 2003). The same word must mean the same thing to everyone. +- Definitions use genus-differentia format: state the category, then the distinguishing characteristic. Example: "A Fill is an execution event that records price, quantity, and fee of a completed order match." +- Extract terms by scanning domain events, commands, entity names, and interview transcripts for domain-specific nouns and verbs carrying business meaning. +- The glossary is append-only: never delete entries; mark retired with a reference to the replacement. Aliases (different words for the same concept) must be documented. +- Cross-reference every term against the domain model and feature files. A term serving double duty across contexts indicates a missing context boundary. ## Concepts -**Ubiquitous Language**. A shared vocabulary between domain experts and developers where every term has exactly one meaning within a bounded context (Evans, 2003). The same word must mean the same thing to everyone. When a term changes meaning across contexts, that boundary must be made explicit. +**Ubiquitous Language**. A shared vocabulary between domain experts and developers where every term has exactly one meaning within a bounded context (Evans, 2003). When a term changes meaning across contexts, that boundary must be explicit in the domain model's context map. -**Genus-Differentia Format**. Every definition follows the pattern: "[Term] is a [genus/category] that [differentia/distinguishing characteristic]." For example: "A Repository is a collection-like interface that abstracts persistence behind a domain-oriented lookup." The genus (collection-like interface) places it in a known category; the differentia (abstracts persistence behind domain-oriented lookup) distinguishes it from other interfaces. +**Genus-Differentia Format**. 
Every definition follows: "[Term] is a [genus/category] that [differentia/distinguishing characteristic]." The genus places it in a known category; the differentia distinguishes it from other category members. + +**Term Extraction**. Terms come from three sources: domain events and commands (FillDetected → "Fill", PlaceOrder → "Order"), entity and value object names from the domain model, and interview transcripts (domain-specific nouns stakeholders emphasize or repeat). Technical terms (API, database) are excluded unless they carry domain meaning. -**Append-Only Glossary**. The glossary records every term the team uses. When understanding shifts and a term's definition changes, the old entry is marked retired (not deleted) and a new entry is written. This preserves the history of domain understanding and prevents confusion when old documents reference superseded terms. +**Append-Only Glossary**. When understanding shifts, the old entry is marked retired (not deleted) and a new entry is written. This preserves domain understanding history and prevents confusion with superseded terms. -**Aliases**. When two words refer to the same concept (e.g., "Order" and "Purchase" in an e-commerce domain), both are documented with one marked as the primary term and the other as an alias. This prevents parallel vocabularies from forming. + +**Cross-Referencing**. Verify each glossary term against domain model and feature files. A term in the glossary but not the domain model is either a concept missing from the model or a stale entry. A term in domain model but not glossary is an incomplete glossary. Cross-context ambiguity flags context boundaries. ## Content -### Definition Format +### Term Detection Heuristics + +Scan these sources for terms needing definition: + +1. **Event names**: Each contains 1-2 domain nouns (FillDetected → "Fill", "Fill Detection") +2. **Command names**: Domain verbs and nouns (PlaceOrder → "Order", "Order Placement") +3.
**Entity names**: Every entity and value object is a term +4. **Interview nouns**: Domain-specific nouns stakeholders repeat, emphasize, or define +5. **Interview verbs**: Domain-specific verbs describing business actions (not "store", "compute") +6. **Qualifying adjectives**: "available" balance, "locked" balance, "orphaned" order — the adjective changes meaning -Each glossary entry contains: +Exclude: programming terms (class, method, dict), infrastructure terms (API, HTTP, JSON), generic terms (value, process) unless domain-specific. + +### Definition Format | Field | Purpose | |---|---| | Term | The word or phrase being defined | | Definition | Genus-differentia format | -| Aliases | Other words that mean the same thing | -| Example | A concrete usage in context | -| Source | Where the term originated (interview, document, etc.) | +| Aliases | Other words meaning the same thing | +| Example | Concrete usage in context | +| Source | Where the term originated | ### Retirement Process -When a term is superseded: - 1. Add `Status: Retired` to the existing entry 2. Add `Superseded by: ` with a reference 3. Write a new entry for the replacement term -4. Never delete. Retired entries remain for historical reference +4. Never delete — retired entries remain for historical reference + +### Cross-Reference Verification -### Cross-Referencing +1. **Glossary → Domain Model**: For each term, find it in entities, relationships, or context descriptions. If missing, flag it. +2. **Domain Model → Glossary**: For each entity name and relationship noun, find it in the glossary. If missing, add it. +3. **Glossary → Feature Files**: For each Phase 1 term, find it in at least one feature file. If missing, flag as potentially out of scope. +4. **Ambiguity detection**: If a term has different meanings in different contexts, document each meaning separately within its context and note the boundary. 
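Heuristics 1-2 (splitting event and command names into candidate terms) can be sketched as a small helper; the splitting rule and examples here are an illustrative assumption, since real extraction still needs human judgment to drop non-domain words:

```python
import re

# Hypothetical sketch of heuristics 1-2: split CamelCase event/command
# names into candidate glossary terms for human review.

def candidate_terms(name):
    # "FillDetected" -> ["Fill", "Detected"]
    words = re.findall(r"[A-Z][a-z]+", name)
    terms = set(words)
    if len(words) > 1:
        # also keep the full phrase as a candidate compound term
        terms.add(" ".join(words))
    return terms

print(sorted(candidate_terms("FillDetected")))  # ['Detected', 'Fill', 'Fill Detected']
print(sorted(candidate_terms("PlaceOrder")))    # ['Order', 'Place', 'Place Order']
```

The output is a candidate list, not a glossary: a reviewer still prunes non-domain words and rewrites compounds (e.g. "Fill Detected" would likely be recorded as "Fill Detection") before defining each surviving term in genus-differentia format.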
-After writing or updating definitions: +### Cross-Context Terms -- Verify each term matches how it's used in the domain model -- Verify each term matches how it's used in feature files -- Flag any term serving double duty across bounded contexts. This indicates a missing context boundary (Evans, 2003) +When the same word has different meanings in different bounded contexts: +- Document each meaning separately with its context +- Note the context boundary in the glossary entry +- Verify the domain model's context map reflects this boundary ## Related - [[domain-modeling/event-storming]] +- [[domain-modeling/domain-modeling]] - [[requirements/interview-techniques]] -- [[requirements/decomposition]] \ No newline at end of file diff --git a/.opencode/knowledge/skill-design/principles.md b/.opencode/knowledge/skill-design/principles.md index 7c586f62..401d8b9c 100644 --- a/.opencode/knowledge/skill-design/principles.md +++ b/.opencode/knowledge/skill-design/principles.md @@ -54,7 +54,7 @@ description: "" # -Available knowledge: [[domain/concept]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[domain/concept]]. `in` artifacts: read all before starting work. 1. 2. ← Link at point of use @@ -103,7 +103,7 @@ Every skill that has `in` artifacts follows two rules: 1. **Verify before proceeding.** Check that all `in` artifacts exist on disk. If any are missing, stop and flag the missing artifact rather than proceeding with assumed knowledge. This prevents the most common source of rework: building on assumptions about documents that don't exist or have been moved. -2. **Read on demand, not eagerly.** Discover what's available first (`ls`, `find`), then read only the files and sections needed for the current step. The `in` list defines what you *may* read, not what you *must* read upfront. Loading all `in` artifacts before starting wastes context and causes middle-position attention degradation (Liu et al., 2023). +2. 
**Read all `in` artifacts before starting work.** All `in` artifacts are mandatory context — read them in full before executing the skill's procedural steps. For wildcard patterns (`*.md`), list the directory first to discover what's available, then read all discovered files. The `in` list defines what you *must* read, not what you *may* optionally reference. ## Related diff --git a/.opencode/skills/accept-feature/SKILL.md b/.opencode/skills/accept-feature/SKILL.md index fbd210a2..6261473a 100644 --- a/.opencode/skills/accept-feature/SKILL.md +++ b/.opencode/skills/accept-feature/SKILL.md @@ -5,7 +5,7 @@ description: "Validate business behavior against BDD scenarios from the end user # Accept Feature -Available knowledge: [[requirements/gherkin#key-takeaways]], [[software-craft/test-design#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/gherkin#key-takeaways]], [[software-craft/test-design#key-takeaways]]. `in` artifacts: read all before starting work. 1. Run `task test-build` to verify all tests pass with coverage. 2. Verify all BDD scenarios pass from the end user's perspective, not the test harness, per [[software-craft/test-design#key-takeaways]]. diff --git a/.opencode/skills/analyze-root-cause/SKILL.md b/.opencode/skills/analyze-root-cause/SKILL.md index 5e9832f4..705d2598 100644 --- a/.opencode/skills/analyze-root-cause/SKILL.md +++ b/.opencode/skills/analyze-root-cause/SKILL.md @@ -5,7 +5,7 @@ description: "Investigate why the PR was rejected, identifying the failure point # Analyze Root Cause -Available knowledge: [[requirements/post-mortem#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/post-mortem#key-takeaways]]. `in` artifacts: read all before starting work. 1. Identify the failure point: which quality gate was missed per [[requirements/post-mortem#key-takeaways]]. 2. 
Determine whether the root cause is in planning, architecture, or implementation. diff --git a/.opencode/skills/assess-architecture/SKILL.md b/.opencode/skills/assess-architecture/SKILL.md index 78e3913f..ece7e7d8 100644 --- a/.opencode/skills/assess-architecture/SKILL.md +++ b/.opencode/skills/assess-architecture/SKILL.md @@ -5,7 +5,7 @@ description: "Evaluate whether the feature requires new architecture or fits the # Assess Architecture -Available knowledge: [[architecture/assessment#key-takeaways]], [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/assessment#key-takeaways]], [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: read all before starting work. 1. Check if architecture already exists per [[architecture/assessment#key-takeaways]]. 2. If architecture exists, verify the delivery mechanism per [[architecture/assessment#concepts]]. diff --git a/.opencode/skills/break-down-feature/SKILL.md b/.opencode/skills/break-down-feature/SKILL.md index a46b3c0a..151a95d2 100644 --- a/.opencode/skills/break-down-feature/SKILL.md +++ b/.opencode/skills/break-down-feature/SKILL.md @@ -5,9 +5,9 @@ description: "Refine coarse Rules into full Rule blocks with adversarial analysi # Break Down Feature -Available knowledge: [[requirements/invest]], [[requirements/decomposition]], [[requirements/pre-mortem#key-takeaways]], [[requirements/interview-techniques#concepts]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/invest]], [[requirements/decomposition]], [[requirements/pre-mortem#key-takeaways]], [[requirements/interview-techniques#concepts]]. `in` artifacts: read all before starting work. -1. Discover and read the feature file, product definition, technical design, domain model, and interview notes from `in`. The feature file contains coarse `Rules (Business)` bullet points from discovery. 
These are behavioral hypotheses, not validated stories. +1. Discover and read the feature file, product definition, domain model, glossary, and interview notes from `in`. The feature file contains coarse `Rules (Business)` bullet points from discovery. These are behavioral hypotheses, not validated stories. 2. For each coarse rule, apply adversarial analysis: - Pre-mortem per [[requirements/pre-mortem#key-takeaways]]: "Imagine this rule was built exactly as described, all tests pass, but it fails for the user. What would be missing?" - CIT per [[requirements/interview-techniques#concepts]]: "When has this behavior gone wrong in practice?" diff --git a/.opencode/skills/commit-implementation/SKILL.md b/.opencode/skills/commit-implementation/SKILL.md index b088ff9a..23272e31 100644 --- a/.opencode/skills/commit-implementation/SKILL.md +++ b/.opencode/skills/commit-implementation/SKILL.md @@ -5,7 +5,7 @@ description: "Commit the reviewed, passing implementation with traceability to f # Commit Implementation -Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: read all before starting work. 1. Run `task test` and `ruff check .` to verify all tests pass and lint is clean before committing. 2. Commit with traceability per [[software-craft/git-conventions#content]]: use granular commit format with @id tags. diff --git a/.opencode/skills/conduct-interview/SKILL.md b/.opencode/skills/conduct-interview/SKILL.md index a09ddf06..35184654 100644 --- a/.opencode/skills/conduct-interview/SKILL.md +++ b/.opencode/skills/conduct-interview/SKILL.md @@ -5,7 +5,7 @@ description: "Interview stakeholders to elicit pain points, business goals, doma # Conduct Stakeholder Interview -Available knowledge: [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: discover and read on demand as needed. 
+Available knowledge: [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: read all before starting work. 1. Start with general questions per [[requirements/interview-techniques#concepts]]. 2. If general questions reveal multiple behaviour groups, probe each as a diff --git a/.opencode/skills/confirm-baseline/SKILL.md b/.opencode/skills/confirm-baseline/SKILL.md index 5f7dfa55..49c08567 100644 --- a/.opencode/skills/confirm-baseline/SKILL.md +++ b/.opencode/skills/confirm-baseline/SKILL.md @@ -5,7 +5,7 @@ description: "Confirm all planning artifacts are complete and the feature is rea # Confirm Baseline -Available knowledge: [[requirements/decomposition#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/decomposition#key-takeaways]]. `in` artifacts: read all before starting work. 1. Verify all Examples have `@id` tags. If any are missing, the feature is not ready for baseline. 2. Verify the feature passes decomposition checks per [[requirements/decomposition#key-takeaways]]: no more than 2 concerns, no more than 8 Must Examples. diff --git a/.opencode/skills/create-pr/SKILL.md b/.opencode/skills/create-pr/SKILL.md index ce8e7b9d..911f0421 100644 --- a/.opencode/skills/create-pr/SKILL.md +++ b/.opencode/skills/create-pr/SKILL.md @@ -5,7 +5,7 @@ description: "Push local main to remote and create an administrative PR for chan # Create PR -Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: read all before starting work. 1. Push local main to remote: `git push origin main`. 2. Create a pull request with the squashed commit format from [[software-craft/git-conventions#content]], including @id traceability for all acceptance criteria. 
diff --git a/.opencode/skills/create-py-stubs/SKILL.md b/.opencode/skills/create-py-stubs/SKILL.md index 6b9b3b4a..90e09c1a 100644 --- a/.opencode/skills/create-py-stubs/SKILL.md +++ b/.opencode/skills/create-py-stubs/SKILL.md @@ -5,7 +5,7 @@ description: "Create minimum typed stubs and test stubs as domain model breadcru # Create Python Stubs -Available knowledge: [[architecture/technical-design]], [[software-craft/stub-design]], [[software-craft/tdd]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/technical-design]], [[software-craft/stub-design]], [[software-craft/tdd]]. `in` artifacts: read all before starting work. 1. Read the feature file and identify all `@id` tags and the domain entities, value objects, and use cases referenced by the Examples. 2. For each referenced entity/value object/use case not yet implemented, create a minimal typed stub per [[software-craft/stub-design#concepts]]: Protocol method signatures with `raise NotImplementedError` bodies, no docstrings, no type hints beyond the contract. These stubs are breadcrumbs from the domain model. The SE can add, remove, or modify them during implementation. diff --git a/.opencode/skills/decide-batch-action/SKILL.md b/.opencode/skills/decide-batch-action/SKILL.md index dfd61d66..a549ecc0 100644 --- a/.opencode/skills/decide-batch-action/SKILL.md +++ b/.opencode/skills/decide-batch-action/SKILL.md @@ -5,7 +5,7 @@ description: "Ask the stakeholder whether to publish the accumulated batch as a # Decide Batch Action -`in` artifacts: discover and read on demand as needed. +`in` artifacts: read all before starting work. 1. Present the stakeholder with the current state: how many features are on local main, whether integration tests pass, and what features remain in the backlog. 2. Ask: publish this batch as a PR, or continue accumulating features on local main? 
diff --git a/.opencode/skills/define-done/SKILL.md b/.opencode/skills/define-done/SKILL.md index 665c9c99..8f64d5d7 100644 --- a/.opencode/skills/define-done/SKILL.md +++ b/.opencode/skills/define-done/SKILL.md @@ -5,7 +5,7 @@ description: "Define the quality gates that must pass before the feature is cons # Define Done -Available knowledge: [[software-craft/code-review#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/code-review#key-takeaways]]. `in` artifacts: read all before starting work. 1. Define quality gates per [[software-craft/code-review#key-takeaways]]: design correctness, test quality, and conventions. 2. Incorporate quality attributes from the product definition into the gates. diff --git a/.opencode/skills/define-product-scope/SKILL.md b/.opencode/skills/define-product-scope/SKILL.md index 29235509..5def17a2 100644 --- a/.opencode/skills/define-product-scope/SKILL.md +++ b/.opencode/skills/define-product-scope/SKILL.md @@ -5,7 +5,7 @@ description: "Define what the product IS and IS NOT, who the users are, and the # Define Product Scope -Available knowledge: [[architecture/quality-attributes#key-takeaways]], [[requirements/pre-mortem#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/quality-attributes#key-takeaways]], [[requirements/pre-mortem#key-takeaways]]. `in` artifacts: read all before starting work. 1. Define product scope per the domain model and glossary. 2. Define quality attributes per [[architecture/quality-attributes#concepts]]. 
diff --git a/.opencode/skills/define-ubiquitous-language/SKILL.md b/.opencode/skills/define-ubiquitous-language/SKILL.md index ee385b9d..a16a2b28 100644 --- a/.opencode/skills/define-ubiquitous-language/SKILL.md +++ b/.opencode/skills/define-ubiquitous-language/SKILL.md @@ -1,12 +1,17 @@ --- name: define-ubiquitous-language -description: "Formalize the ubiquitous language by defining domain terms into a glossary" +description: "Extract domain terms from the domain model and interview notes, define them in genus-differentia format, and cross-reference against domain model and features" --- # Define Ubiquitous Language -Available knowledge: [[requirements/ubiquitous-language]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/ubiquitous-language]]. `in` artifacts: read all before starting work. -1. For each candidate term, write a genus-differentia definition per [[requirements/ubiquitous-language#key-takeaways]]. -2. Cross-reference with existing glossary entries: mark retired terms rather than deleting per [[requirements/ubiquitous-language#key-takeaways]]. -3. Identify aliases (different words for the same concept) and document them per [[requirements/ubiquitous-language]]. +1. Read `domain_model.md` and all interview notes. If `glossary.md` already exists, read it for cumulative editing. +2. Extract terms per [[requirements/ubiquitous-language#content]]: scan domain events, commands, entity names, and interview nouns/verbs for domain-specific terms carrying business meaning. +3. Filter out technical and generic terms per [[requirements/ubiquitous-language#content]] exclusion rules. +4. For each term, write a genus-differentia definition per [[requirements/ubiquitous-language#key-takeaways]]. Document aliases where multiple words refer to the same concept. +5. If existing glossary entries conflict with new understanding, retire the old entry per [[requirements/ubiquitous-language#key-takeaways]] rather than deleting. +6. 
Cross-reference per [[requirements/ubiquitous-language#content]]: verify glossary → domain model, domain model → glossary, glossary → feature files. Flag any gaps. +7. If a term has different meanings across bounded contexts, document each meaning separately within its context and note the context boundary. +8. Write all definitions into `glossary.md`. If the file already exists, edit cumulatively — preserve valid entries, update based on new information. diff --git a/.opencode/skills/design-assets/SKILL.md b/.opencode/skills/design-assets/SKILL.md index 96fcaedf..5cbd260d 100644 --- a/.opencode/skills/design-assets/SKILL.md +++ b/.opencode/skills/design-assets/SKILL.md @@ -5,7 +5,7 @@ description: "Create logo and banner using favicon-first, monochrome-first, prog # Design Assets -Available knowledge: [[design/project-assets#key-takeaways]], [[design/visual-harmony#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[design/project-assets#key-takeaways]], [[design/visual-harmony#key-takeaways]]. `in` artifacts: read all before starting work. 1. Read `docs/branding.md` and extract the personality adjectives, visual metaphor (if any), and colour palette from the Visual section. 2. Determine the logo type per [[design/identity-design#concepts]]: combination mark (new brands), abstract mark (established names), pictogram (strong visual metaphor), or letterform (compact avatar). diff --git a/.opencode/skills/design-colors/SKILL.md b/.opencode/skills/design-colors/SKILL.md index a4939865..720fed48 100644 --- a/.opencode/skills/design-colors/SKILL.md +++ b/.opencode/skills/design-colors/SKILL.md @@ -5,7 +5,7 @@ description: "Select and validate a colour palette with WCAG contrast, dark-mode # Design Colours -Available knowledge: [[design/color-systems#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[design/color-systems#key-takeaways]]. `in` artifacts: read all before starting work. 
1. Read `docs/branding.md` and extract the personality adjectives from the Identity section. 2. Propose a primary hue based on the hue-semantics table in [[design/color-systems#content]]. The primary must reinforce the personality adjectives. diff --git a/.opencode/skills/design-technical-solution/SKILL.md b/.opencode/skills/design-technical-solution/SKILL.md index a5581042..2dd2aae9 100644 --- a/.opencode/skills/design-technical-solution/SKILL.md +++ b/.opencode/skills/design-technical-solution/SKILL.md @@ -1,21 +1,16 @@ --- name: design-technical-solution -description: "Design the technical solution: architectural style, stack, module structure, API/event contracts, interface definitions" +description: "Select technology stack, document dependencies, and route architecturally significant decisions to ADRs" --- # Design Technical Solution -Available knowledge: [[architecture/quality-attributes#key-takeaways]], [[architecture/technical-design#key-takeaways]], [[architecture/contract-design#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/quality-attributes#key-takeaways]], [[architecture/technical-design#key-takeaways]]. `in` artifacts: read all before starting work. 1. Rank quality attributes by business priority per [[architecture/quality-attributes#concepts]]. 2. Select architectural style per the quality-attribute-to-style mapping in [[architecture/quality-attributes#concepts]]. -3. Define the stack. -4. Define module structure per [[architecture/technical-design#concepts]]. -5. For each integration point in the context map, define a contract per - [[architecture/contract-design#concepts]]. -6. Draw C4 diagrams per [[architecture/technical-design#concepts]]. -7. Document dependencies and configuration keys. -8. Update system overview sections to reflect the current design. -9. If a decision is architecturally significant per [[architecture/adr#key-takeaways]], - route to needs_decisions. +3. 
Define the technology stack and write to product_definition.md. +4. Document dependency rationale and write to product_definition.md. +5. If a decision is architecturally significant per [[architecture/adr#key-takeaways]], + the draft-adr skill (next in this state's dispatch) will document it as an ADR. diff --git a/.opencode/skills/determine-action-items/SKILL.md b/.opencode/skills/determine-action-items/SKILL.md index d77c44ee..32b897a2 100644 --- a/.opencode/skills/determine-action-items/SKILL.md +++ b/.opencode/skills/determine-action-items/SKILL.md @@ -5,7 +5,7 @@ description: "Determine whether the feature needs replanning, architecture chang # Determine Action Items -Available knowledge: [[requirements/post-mortem#concepts]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/post-mortem#concepts]]. `in` artifacts: read all before starting work. 1. Determine routing per [[requirements/post-mortem#concepts]]. 2. Update the post-mortem with the restart check per [[requirements/post-mortem#key-takeaways]]. diff --git a/.opencode/skills/discover-features/SKILL.md b/.opencode/skills/discover-features/SKILL.md index 8f55ad8e..89ad4c1e 100644 --- a/.opencode/skills/discover-features/SKILL.md +++ b/.opencode/skills/discover-features/SKILL.md @@ -1,22 +1,18 @@ --- name: discover-features -description: "Synthesize analysis artifacts into .feature files with coherent boundaries, business rules, and constraints" +description: "Identify feature boundaries from the delivery order, validated against bounded contexts and aggregate boundaries" --- # Discover Features -Available knowledge: [[requirements/feature-discovery#concepts]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/feature-boundaries]], [[requirements/feature-discovery#concepts]]. `in` artifacts: read all before starting work. -1. Read the product definition, domain model, technical design, and interview notes. -2. 
Map delivery order steps to bounded contexts and aggregate boundaries. IF a delivery step spans multiple aggregates → flag for potential split. IF multiple delivery steps share one aggregate → they may belong together. -3. For each feature boundary, cross-reference domain events, entity invariants, and interview findings to identify business rules. -4. IF artifacts are ambiguous, contradictory, or incomplete for a feature boundary or business rule → ask the stakeholder targeted questions using CIT and laddering per [[requirements/interview-techniques#concepts]]. Record answers in the feature's Questions table. -5. Derive coarse `Rules (Business)` bullets from the synthesized understanding: one per behavioral hypothesis. -6. For each feature, identify applicable Constraints from the product definition's quality attributes. -7. Run gap analysis per [[requirements/feature-discovery#concepts]]: - - Every bounded context covered by at least one feature? - - Every quality attribute enforced by at least one feature? - - Every critical domain event traceable to a rule? - IF any gap is found → flag it. Do NOT silently fill gaps with assumed rules. -8. Create a `.feature` file from the template at `.templates/docs/features/feature.feature.template` for each feature with title, description, Status: ELICITING, Rules (Business), and Constraints. -9. Do NOT write full `Rule:` blocks (As a/I want/So that) or `Example:` blocks. Those require the adversarial analysis of breakdown. +1. Read product_definition.md, domain_model.md, and glossary.md from `in` artifacts. +2. List the delivery order steps from product_definition.md. Each step is a feature candidate per [[requirements/feature-boundaries#key-takeaways]]. +3. For each candidate, map it to bounded contexts using the domain model's entity table. IF a candidate spans multiple contexts → flag for splitting per [[requirements/feature-boundaries#key-takeaways]]. +4. 
For each candidate, map it to aggregate boundaries using the domain model's aggregate boundary table. IF a candidate requires cross-aggregate transactions → flag for splitting per [[requirements/feature-boundaries#key-takeaways]]. +5. Name each feature per [[requirements/feature-boundaries#content]]: use the delivery step name, validated for clarity and specificity. +6. Write a description for each feature per [[requirements/feature-boundaries#content]]: what it provides, which context it serves, why it exists, key entities. +7. Identify cross-cutting quality attributes from product_definition.md that will become Constraints — note which features they distribute to per [[requirements/feature-boundaries#content]] — but do NOT write Constraints yet; discover-rules will write them. +8. Create a `.feature` file from the template at `.templates/docs/features/feature.feature.template` for each feature with title, description, Status: ELICITING, and an empty Questions table. Do NOT write Rules (Business) or Constraints — those come from the discover-rules skill. +9. Run context coverage gap analysis per [[requirements/feature-discovery#content]]: every bounded context covered by at least one feature? IF any gap → add a Questions entry flagging it. \ No newline at end of file diff --git a/.opencode/skills/discover-rules/SKILL.md b/.opencode/skills/discover-rules/SKILL.md new file mode 100644 index 00000000..11aa9f2d --- /dev/null +++ b/.opencode/skills/discover-rules/SKILL.md @@ -0,0 +1,17 @@ +--- +name: discover-rules +description: "Derive business rules and constraints from domain model artifacts (events, invariants, commands) and map them to feature files" +--- + +# Discover Rules + +Available knowledge: [[requirements/rule-derivation]], [[requirements/feature-discovery#concepts]]. `in` artifacts: read all before starting work. + +1. 
Read product_definition.md, domain_model.md, glossary.md, and all `.feature` files (created by discover-features in this same state) from `in` artifacts. +2. Assign domain model artifacts to features per [[requirements/rule-derivation#content]]: using the bounded context column in the domain model's entity table, assign each entity, event, and command to the feature corresponding to its context. +3. For each feature, derive behavioral rules from domain events per [[requirements/rule-derivation#key-takeaways]]: "When [event], then [consequence]." Write each as a coarse bullet under `Rules (Business)`. +4. For each feature, derive structural rules from aggregate invariants per [[requirements/rule-derivation#key-takeaways]]: "[Entity] must always [condition]." Write each as a coarse bullet under `Rules (Business)`. +5. For each feature, derive action rules from commands per [[requirements/rule-derivation#key-takeaways]]: "[Actor] can [action] when [precondition]." Write each as a coarse bullet under `Rules (Business)`. +6. For each quality attribute in product_definition.md, map it to the feature(s) that enforce it per [[requirements/rule-derivation#key-takeaways]]. Write each as a Constraint with a measurable threshold. +7. Run traceability verification per [[requirements/rule-derivation#content]]: every event → at least one rule, every invariant → at least one rule, every command → at least one rule, every quality attribute → at least one constraint. IF any gap → flag it in the feature's Questions table. Do NOT silently fill gaps with assumed rules. +8. Write all Rules (Business) bullets and Constraints into each `.feature` file. Do NOT write full `Rule:` blocks (As a/I want/So that) or `Example:` blocks — those require the adversarial analysis of breakdown. 
\ No newline at end of file diff --git a/.opencode/skills/document-post-mortem/SKILL.md b/.opencode/skills/document-post-mortem/SKILL.md index 1dd62fc6..ad4c6cbf 100644 --- a/.opencode/skills/document-post-mortem/SKILL.md +++ b/.opencode/skills/document-post-mortem/SKILL.md @@ -5,6 +5,6 @@ description: "Record what failed, why, and which quality gate was missed" # Document Post-Mortem -Available knowledge: [[requirements/post-mortem]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/post-mortem]]. `in` artifacts: read all before starting work. 1. Record what failed, why, and which quality gate was missed per [[requirements/post-mortem#key-takeaways]]. diff --git a/.opencode/skills/domain-discovery/SKILL.md b/.opencode/skills/domain-discovery/SKILL.md new file mode 100644 index 00000000..fa2659e4 --- /dev/null +++ b/.opencode/skills/domain-discovery/SKILL.md @@ -0,0 +1,18 @@ +--- +name: domain-discovery +description: "Convergent synthesis: formalize event storming candidates into bounded contexts, entities, relationships, aggregate boundaries, and context map" +--- + +# Domain Discovery + +Available knowledge: [[domain-modeling/domain-modeling]], [[domain-modeling/context-mapping#key-takeaways]]. `in` artifacts: read all before starting work. + +1. Read `domain_model.md` (written by facilitate-event-storming in the same state) and all interview notes. +2.
List entity candidates from event subjects, command targets, and read model references per [[domain-modeling/domain-modeling#content]]. +3. Classify each candidate as Entity (has identity + lifecycle) or Value Object (defined by attributes, immutable) per [[domain-modeling/domain-modeling#key-takeaways]]. +4. Determine relationships between entities: composition, dependency, or domain flow. Assign cardinality (1:1, 1:N, M:N) per [[domain-modeling/domain-modeling#key-takeaways]]. +5. Define aggregate boundaries per [[domain-modeling/domain-modeling#key-takeaways]]: group entities sharing invariants. Document root entity, invariants, and business reason for each grouping. +6. Identify context boundaries: group aggregates sharing a ubiquitous language. A boundary exists where terms change meaning, consistency requirements differ, or independent deployment is needed. +7. Map context relationships per [[domain-modeling/context-mapping#key-takeaways]]: classify each inter-context relationship (OHS, Conformist, Customer-Supplier, Partnership, ACL) and document translation rules. +8. Write formalized sections into `domain_model.md`: **bounded_contexts**, **entities**, **relationships**, **aggregate_boundaries**, **context_map**. If the file already has these sections from a prior iteration, edit them cumulatively — preserve valid content, update based on new information. +9. Add a **Changes** entry recording what was formalized and why. diff --git a/.opencode/skills/draft-adr/SKILL.md b/.opencode/skills/draft-adr/SKILL.md index 361e0381..b88dc8fc 100644 --- a/.opencode/skills/draft-adr/SKILL.md +++ b/.opencode/skills/draft-adr/SKILL.md @@ -1,13 +1,12 @@ --- name: draft-adr -description: "Document architecturally significant decisions as ADRs and record key decisions in technical_design.md" +description: "Document architecturally significant decisions as ADRs" --- # Draft ADR -Available knowledge: [[architecture/adr#key-takeaways]]. 
`in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/adr#key-takeaways]]. `in` artifacts: read all before starting work. 1. Identify architecturally significant decisions per [[architecture/adr#concepts]]. 2. For each significant decision, write an ADR per [[architecture/adr#concepts]]. 3. For each ADR, assess risks per [[architecture/adr#concepts]]. -4. Record key decisions and active constraints in technical_design.md. diff --git a/.opencode/skills/extract-lessons/SKILL.md b/.opencode/skills/extract-lessons/SKILL.md index 037a9174..550dc932 100644 --- a/.opencode/skills/extract-lessons/SKILL.md +++ b/.opencode/skills/extract-lessons/SKILL.md @@ -5,7 +5,7 @@ description: "Determine the corrective fix and update the post-mortem with remed # Extract Lessons -Available knowledge: [[requirements/post-mortem#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/post-mortem#key-takeaways]]. `in` artifacts: read all before starting work. 1. Determine the corrective fix per [[requirements/post-mortem#key-takeaways]]. 2. Update the post-mortem with remediation steps. diff --git a/.opencode/skills/facilitate-event-storming/SKILL.md b/.opencode/skills/facilitate-event-storming/SKILL.md index bfd9a8dd..cd92cbfa 100644 --- a/.opencode/skills/facilitate-event-storming/SKILL.md +++ b/.opencode/skills/facilitate-event-storming/SKILL.md @@ -1,14 +1,18 @@ --- name: facilitate-event-storming -description: "Facilitate an event storming workshop to surface domain events, commands, and aggregate candidates" +description: "Divergent exploration: extract domain events, commands, read models, and hotspot candidates from interview notes using Brandolini's event storming technique" --- # Facilitate Event Storming -Available knowledge: [[domain-modeling/event-storming#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[domain-modeling/event-storming]]. 
`in` artifacts: read all before starting work. -1. Identify domain events (past-tense verbs) from interview data per [[domain-modeling/event-storming#key-takeaways]]. -2. Chronologically order events on a timeline. -3. Identify commands (imperative verbs) that trigger each event per [[domain-modeling/event-storming#key-takeaways]]. -4. Group events and commands into candidate bounded contexts per [[domain-modeling/event-storming#key-takeaways]]. -5. Identify aggregate candidates per [[domain-modeling/event-storming#key-takeaways]]. +1. If `domain_model.md` already exists, read it — this is a cumulative artifact across iterations. +2. Read all interview notes from `in` artifacts. +3. Execute chaotic exploration per [[domain-modeling/event-storming#content]]: extract all domain events from interview data using extraction heuristics. Name each event in SubjectVerbEd PascalCase. +4. Enforce timeline: arrange events chronologically. For each gap, insert missing events exposed by the ordering. +5. Identify hotspots per [[domain-modeling/event-storming#content]]: mark conflicts, ambiguities, contradictions, and unclear triggers. Record each as `[event/term] — [conflict nature] — [source]`. +6. Map external systems: identify actors and systems outside the domain that trigger events. +7. Map commands per [[domain-modeling/event-storming#content]]: for each event, determine the command, actor, read model, preconditions, and rejection event. +8. Form candidate groupings per [[domain-modeling/event-storming#content]]: cluster into aggregate candidates (transactional consistency) and context candidates (linguistic boundary). +9. Write findings into `domain_model.md`: update or create the **summary** section with domain overview. Write the **Events and Commands** section with Domain Events and Commands tables per [[domain-modeling/event-storming#content]]. Add a **Changes** entry recording what was discovered. 
diff --git a/.opencode/skills/implement-minimum/SKILL.md b/.opencode/skills/implement-minimum/SKILL.md index 2b4272e0..fbe954c4 100644 --- a/.opencode/skills/implement-minimum/SKILL.md +++ b/.opencode/skills/implement-minimum/SKILL.md @@ -5,8 +5,8 @@ description: "Write the minimum production code needed to make the failing test # Implement Minimum -Available knowledge: [[software-craft/tdd]], [[software-craft/test-design]], [[software-craft/smell-catalogue]], [[software-craft/object-calisthenics]], [[software-craft/solid]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/tdd]], [[software-craft/test-design]], [[software-craft/smell-catalogue]], [[software-craft/object-calisthenics]], [[software-craft/solid]]. `in` artifacts: read all before starting work. 1. Write the minimum code to make the failing test pass AND satisfy reviewer checks per [[software-craft/tdd#key-takeaways]]. Add docstrings, type hints, and lint compliance only when reviewers require them, not proactively. -2. IF a spec gap or inconsistency is discovered during implementation → do NOT modify specification documents (domain_model.md, technical_design.md, glossary.md, product_definition.md, context_map.md, ADRs, feature files). These are owned by other flow states. Flag the gap in output notes. The SE may ONLY modify production code and test code. +2. IF a spec gap or inconsistency is discovered during implementation → do NOT modify specification documents (domain_model.md, glossary.md, product_definition.md, ADRs, feature files). These are owned by other flow states. Flag the gap in output notes. The SE may ONLY modify production code and test code. 3. Run `task test-fast` to confirm the test passes (GREEN). 
diff --git a/.opencode/skills/map-contexts/SKILL.md b/.opencode/skills/map-contexts/SKILL.md index 602c7160..b7f5ec99 100644 --- a/.opencode/skills/map-contexts/SKILL.md +++ b/.opencode/skills/map-contexts/SKILL.md @@ -1,16 +1,15 @@ --- name: map-contexts -description: "Map bounded context relationships, integration points, and anti-corruption layers" +description: "Map bounded context relationships, Vernon patterns, and anti-corruption layers into domain_model.md" --- # Map Contexts -Available knowledge: [[domain-modeling/context-mapping#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[domain-modeling/context-mapping#key-takeaways]]. `in` artifacts: read all before starting work. 1. For each pair of interacting bounded contexts, select a relationship pattern per [[domain-modeling/context-mapping#concepts]]. 2. Draw a context map diagram showing all relationships. -3. For each cross-context interaction, define an integration point per - [[domain-modeling/context-mapping#concepts]]. -4. If a downstream context needs isolation from an upstream model, design an +3. If a downstream context needs isolation from an upstream model, design an anti-corruption layer per [[domain-modeling/context-mapping#concepts]]. +4. Write the context map section into domain_model.md (relationships table, diagram, ACL table). diff --git a/.opencode/skills/merge-local/SKILL.md b/.opencode/skills/merge-local/SKILL.md index cd3dafa6..ade178d6 100644 --- a/.opencode/skills/merge-local/SKILL.md +++ b/.opencode/skills/merge-local/SKILL.md @@ -5,7 +5,7 @@ description: "Squash-merge feature commits into local main, pull remote main, an # Merge Local -Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: read all before starting work. 1. 
Pull latest remote main: `git fetch origin main && git merge --ff-only origin/main` into local main. 2. If remote main has diverged, rebase the feature branch on updated main before squash-merging. diff --git a/.opencode/skills/model-domain/SKILL.md b/.opencode/skills/model-domain/SKILL.md index 8e23539e..fac4c2bb 100644 --- a/.opencode/skills/model-domain/SKILL.md +++ b/.opencode/skills/model-domain/SKILL.md @@ -5,7 +5,7 @@ description: "Formalize candidates into bounded contexts, entities, relationship # Model Domain -Available knowledge: [[domain-modeling/event-storming#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[domain-modeling/event-storming#key-takeaways]]. `in` artifacts: read all before starting work. 1. Define bounded contexts per [[domain-modeling/event-storming#key-takeaways]]. 2. Define entities within each context: name, attributes, lifecycle. diff --git a/.opencode/skills/refactor/SKILL.md b/.opencode/skills/refactor/SKILL.md index c3a6a8a1..90df556b 100644 --- a/.opencode/skills/refactor/SKILL.md +++ b/.opencode/skills/refactor/SKILL.md @@ -5,7 +5,7 @@ description: "Improve code structure while keeping all tests passing, then cycle # Refactor -Available knowledge: [[software-craft/tdd]], [[software-craft/refactoring]], [[software-craft/object-calisthenics]], [[software-craft/smell-catalogue]], [[software-craft/refactoring-techniques]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/tdd]], [[software-craft/refactoring]], [[software-craft/object-calisthenics]], [[software-craft/smell-catalogue]], [[software-craft/refactoring-techniques]]. `in` artifacts: read all before starting work. 1. Review the code for improvement opportunities while keeping all tests passing per [[software-craft/tdd#concepts]]. 2. Refactor only if there is a test that would break if the refactoring is wrong per [[software-craft/tdd#key-takeaways]]. 
@@ -21,6 +21,6 @@ Available knowledge: [[software-craft/tdd]], [[software-craft/refactoring]], [[s 12. IF Data Clumps → Introduce Parameter Object per [[software-craft/smell-catalogue#concepts]]. 13. IF Shotgun Surgery or Divergent Change → Extract Class per [[software-craft/smell-catalogue#concepts]]. 14. IF no improvement is needed → skip refactoring and proceed to the next test. -15. IF a spec gap or inconsistency is discovered during refactoring → do NOT modify specification documents (domain_model.md, technical_design.md, glossary.md, product_definition.md, context_map.md, ADRs, feature files). Flag it in output notes. The SE may ONLY modify production code and test code. +15. IF a spec gap or inconsistency is discovered during refactoring → do NOT modify specification documents (domain_model.md, glossary.md, product_definition.md, ADRs, feature files). Flag it in output notes. The SE may ONLY modify production code and test code. 16. Commit refactor changes separately from feature changes per [[software-craft/git-conventions#concepts]]. 17. Run `task test-fast` to confirm all tests remain green after refactoring. diff --git a/.opencode/skills/review-architecture/SKILL.md b/.opencode/skills/review-architecture/SKILL.md index 13108276..ec6ee006 100644 --- a/.opencode/skills/review-architecture/SKILL.md +++ b/.opencode/skills/review-architecture/SKILL.md @@ -5,7 +5,7 @@ description: "Independently verify architecture alignment with domain model and # Review Architecture -Available knowledge: [[architecture/reconciliation#key-takeaways]], [[architecture/adr#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/reconciliation#key-takeaways]], [[architecture/adr#key-takeaways]]. `in` artifacts: read all before starting work. 1. Declare adversarial stance per [[architecture/reconciliation#concepts]]. 2. Run cross-document consistency checks per [[architecture/reconciliation#concepts]]. 
@@ -13,4 +13,4 @@ Available knowledge: [[architecture/reconciliation#key-takeaways]], [[architectu 4. Verify architectural style satisfies quality attribute priorities per [[architecture/quality-attributes#concepts]]. 5. If any inconsistency is found, resolve per [[architecture/reconciliation#concepts]]. -6. When flagging issues, include file:line references (e.g., "technical_design.md:34 contradicts domain_model.md:12"). Vague findings create rework. +6. When flagging issues, include file:line references (e.g., "product_definition.md:34 contradicts domain_model.md:12"). Vague findings create rework. diff --git a/.opencode/skills/review-conventions/SKILL.md b/.opencode/skills/review-conventions/SKILL.md index 20a5efe2..49822703 100644 --- a/.opencode/skills/review-conventions/SKILL.md +++ b/.opencode/skills/review-conventions/SKILL.md @@ -5,7 +5,7 @@ description: "Verify formatting, docstrings, type hints, and lint rules" # Review Conventions -Available knowledge: [[requirements/ubiquitous-language]], [[software-craft/code-review]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/ubiquitous-language]], [[software-craft/code-review]]. `in` artifacts: read all before starting work. 1. This review tier runs after design and structure review have passed. The SE addresses convention findings only at this stage. The SE does not proactively run lint, format, or type checks during the TDD cycle. 2. Declare fail-fast stance per [[software-craft/code-review#concepts]]: stop at the first failure. 
diff --git a/.opencode/skills/review-design/SKILL.md b/.opencode/skills/review-design/SKILL.md index 1e70f3ad..04d0a134 100644 --- a/.opencode/skills/review-design/SKILL.md +++ b/.opencode/skills/review-design/SKILL.md @@ -5,7 +5,7 @@ description: "Verify implementation aligns with domain model, architectural deci # Review Design -Available knowledge: [[architecture/reconciliation]], [[architecture/adr]], [[software-craft/code-review]], [[software-craft/refactoring]], [[software-craft/object-calisthenics]], [[software-craft/smell-catalogue]], [[software-craft/design-patterns]], [[software-craft/solid]], [[software-craft/tdd]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/reconciliation]], [[architecture/adr]], [[software-craft/code-review]], [[software-craft/refactoring]], [[software-craft/object-calisthenics]], [[software-craft/smell-catalogue]], [[software-craft/design-patterns]], [[software-craft/solid]], [[software-craft/tdd]]. `in` artifacts: read all before starting work. 1. This review tier checks design correctness ONLY. Do not flag lint, coverage, docstring, or naming issues. Those belong to structure or conventions review. 2. Declare adversarial stance per [[software-craft/code-review#concepts]]: default hypothesis: "it might be broken despite green tests." diff --git a/.opencode/skills/review-structure/SKILL.md b/.opencode/skills/review-structure/SKILL.md index 4a52e716..3f3efddf 100644 --- a/.opencode/skills/review-structure/SKILL.md +++ b/.opencode/skills/review-structure/SKILL.md @@ -5,7 +5,7 @@ description: "Verify test coverage, test quality, and behavior-vs-implementation # Review Structure -Available knowledge: [[software-craft/test-design]], [[software-craft/tdd]], [[software-craft/code-review]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/test-design]], [[software-craft/tdd]], [[software-craft/code-review]]. 
`in` artifacts: read all before starting work. 1. This review tier checks test quality and coverage ONLY. Do not flag lint, docstring, or naming issues. Those belong to conventions review. 2. Declare adversarial stance per [[software-craft/code-review#concepts]]: default hypothesis: "tests might be coupled to the wrong thing." diff --git a/.opencode/skills/select-feature/SKILL.md b/.opencode/skills/select-feature/SKILL.md index 7ede19e6..4a1623a0 100644 --- a/.opencode/skills/select-feature/SKILL.md +++ b/.opencode/skills/select-feature/SKILL.md @@ -5,12 +5,12 @@ description: "Select the next feature to develop: by delivery order for the firs # Select Feature -Available knowledge: [[requirements/wsjf]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/wsjf]]. `in` artifacts: read all before starting work. 1. Discover available features by listing `docs/features/` (or the project's feature directory). 2. IF no feature files exist → exit via `no_features`; features need discovery first. 3. IF more than one feature has `Status: BASELINED` → stop; WIP limit is 1. -4. Verify that architecture covers the candidate features by checking `technical_design.md` for relevant module structure. +4. Verify that architecture covers the candidate features by checking `domain_model.md` for relevant bounded contexts and `product_definition.md` for technology stack. 5. IF features have `Status: ELICITING` (no BASELINED features yet) → this is a first-run selection: - Select the first feature by delivery order from `product_definition.md`. The delivery order was established during discovery and already reflects business priority and technical dependencies. - Skip WSJF scoring: there's nothing to compare against. 
diff --git a/.opencode/skills/setup-apply/SKILL.md b/.opencode/skills/setup-apply/SKILL.md index ed51b2b7..fca6927f 100644 --- a/.opencode/skills/setup-apply/SKILL.md +++ b/.opencode/skills/setup-apply/SKILL.md @@ -5,7 +5,7 @@ description: "Apply text substitutions, rename package directory, and write temp # Setup Apply -`in` artifacts: discover and read on demand as needed. +`in` artifacts: read all before starting work. 1. Rename the package directory: `mv app {package_name}` 2. Apply every substitution from `template-config.yaml` substitutions section in order: diff --git a/.opencode/skills/setup-assess/SKILL.md b/.opencode/skills/setup-assess/SKILL.md index 2185ef33..7a9abe2d 100644 --- a/.opencode/skills/setup-assess/SKILL.md +++ b/.opencode/skills/setup-assess/SKILL.md @@ -5,7 +5,7 @@ description: "Interview user to understand project needs and assess parameters f # Setup Assess -Available knowledge: [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: read all before starting work. 1. Use interview techniques per [[requirements/interview-techniques#concepts]] to understand project context. 2. Start with general questions (Funnel Level 1): diff --git a/.opencode/skills/setup-branding/SKILL.md b/.opencode/skills/setup-branding/SKILL.md index 5c1d673b..22ce648c 100644 --- a/.opencode/skills/setup-branding/SKILL.md +++ b/.opencode/skills/setup-branding/SKILL.md @@ -5,7 +5,7 @@ description: "Interview stakeholder to establish brand identity: personality, vi # Setup Branding -Available knowledge: [[design/identity-design#key-takeaways]], [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[design/identity-design#key-takeaways]], [[requirements/interview-techniques#key-takeaways]]. `in` artifacts: read all before starting work. 1. 
Ask the stakeholder for the project name (if not already set in `pyproject.toml`). 2. Ask for a one-sentence tagline describing what the project does. diff --git a/.opencode/skills/setup-configure/SKILL.md b/.opencode/skills/setup-configure/SKILL.md index a24a91f4..81bfa52d 100644 --- a/.opencode/skills/setup-configure/SKILL.md +++ b/.opencode/skills/setup-configure/SKILL.md @@ -7,7 +7,7 @@ description: "Gather and confirm project parameters, validate template files exi Template resolution: templates live in `.templates/`. The instance path is the template path with `.templates/` prefix removed and `.template` suffix removed. Discover templates at runtime with `find .templates -name '*.template'`. Some templates contain `{variable}` tokens (e.g. `{project_name}`, `{YYYYMMDD}`) that the setup skill replaces with actual values. -`in` artifacts: discover and read on demand as needed. +`in` artifacts: read all before starting work. 1. Check that all required template files exist and set evidence: - `pyproject_toml`: Check `pyproject.toml` exists diff --git a/.opencode/skills/setup-verify/SKILL.md b/.opencode/skills/setup-verify/SKILL.md index 3168bd18..d17864fc 100644 --- a/.opencode/skills/setup-verify/SKILL.md +++ b/.opencode/skills/setup-verify/SKILL.md @@ -5,7 +5,7 @@ description: "Verify transformations, clean template artifacts, and finalize the # Setup Verify -`in` artifacts: discover and read on demand as needed. +`in` artifacts: read all before starting work. 1. Run smoke test: `uv sync --all-extras && uv run task test-fast` 2. 
IF smoke test fails: diff --git a/.opencode/skills/structure-project/SKILL.md b/.opencode/skills/structure-project/SKILL.md index e617b523..98c72bb9 100644 --- a/.opencode/skills/structure-project/SKILL.md +++ b/.opencode/skills/structure-project/SKILL.md @@ -5,7 +5,7 @@ description: "Create project skeleton: branch, package directories, port interfa # Structure Project -Available knowledge: [[architecture/technical-design#key-takeaways]], [[software-craft/stub-design]], [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[architecture/technical-design#key-takeaways]], [[software-craft/stub-design]], [[software-craft/git-conventions#key-takeaways]]. `in` artifacts: read all before starting work. 1. Create feature branch per [[software-craft/git-conventions#content]]: `feat/` from latest main. 2. Create package structure per [[architecture/technical-design#key-takeaways]]: directories, `__init__.py` files, port interfaces (Protocol abstractions from hexagonal architecture), and aggregate root class signatures. diff --git a/.opencode/skills/verify-traceability/SKILL.md b/.opencode/skills/verify-traceability/SKILL.md index edd708d8..e68c9e5d 100644 --- a/.opencode/skills/verify-traceability/SKILL.md +++ b/.opencode/skills/verify-traceability/SKILL.md @@ -5,7 +5,7 @@ description: "Verify 1-1 correspondence between @id tags in the feature file and # Verify Traceability -Available knowledge: [[software-craft/test-design#key-takeaways]], [[requirements/gherkin#key-takeaways]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/test-design#key-takeaways]], [[requirements/gherkin#key-takeaways]]. `in` artifacts: read all before starting work. 1. Extract all `@id` tags from the feature file. 2. Extract all test function names from `tests/features//`. 
diff --git a/.opencode/skills/write-bdd-features/SKILL.md b/.opencode/skills/write-bdd-features/SKILL.md index 7fb5d074..b97c1b93 100644 --- a/.opencode/skills/write-bdd-features/SKILL.md +++ b/.opencode/skills/write-bdd-features/SKILL.md @@ -5,7 +5,7 @@ description: "Write concrete Given/When/Then BDD scenarios for each user story u # Write BDD Features -Available knowledge: [[requirements/gherkin]], [[requirements/moscow]], [[requirements/pre-mortem]], [[requirements/decomposition]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[requirements/gherkin]], [[requirements/moscow]], [[requirements/pre-mortem]], [[requirements/decomposition]]. `in` artifacts: read all before starting work. 1. Discover and read the feature file, product definition, domain model, and glossary from `in`. 2. Run a pre-mortem per [[requirements/pre-mortem]] for each Rule before writing any Examples. All Rules must have their pre-mortems completed before any Examples are written. diff --git a/.opencode/skills/write-test/SKILL.md b/.opencode/skills/write-test/SKILL.md index 7dbc6b0f..7cfacea1 100644 --- a/.opencode/skills/write-test/SKILL.md +++ b/.opencode/skills/write-test/SKILL.md @@ -5,9 +5,9 @@ description: "Write a failing test body for one BDD example" # Write Test -Available knowledge: [[software-craft/tdd]], [[software-craft/test-design]], [[software-craft/smell-catalogue]], [[software-craft/object-calisthenics]], [[software-craft/solid]]. `in` artifacts: discover and read on demand as needed. +Available knowledge: [[software-craft/tdd]], [[software-craft/test-design]], [[software-craft/smell-catalogue]], [[software-craft/object-calisthenics]], [[software-craft/solid]]. `in` artifacts: read all before starting work. 1. Pick the next unimplemented `@id` from the feature file: order by fewest dependencies first per [[software-craft/tdd#concepts]]. 2. Write a failing test that specifies the expected behavior per [[software-craft/tdd#key-takeaways]]. 
Preserve the full docstring from the test stub. The Gherkin steps (Given/When/Then) are immutable specification content for traceability and must not be removed, shortened, or reformatted. -3. IF a spec gap or inconsistency is discovered → do NOT modify specification documents (domain_model.md, technical_design.md, glossary.md, product_definition.md, context_map.md, ADRs, feature files). Flag it in output notes. The SE may ONLY modify production code and test code. +3. IF a spec gap or inconsistency is discovered → do NOT modify specification documents (domain_model.md, glossary.md, product_definition.md, ADRs, feature files). Flag it in output notes. The SE may ONLY modify production code and test code. 4. Run `task test-fast` to confirm the test fails for the right reason (RED) per [[software-craft/tdd#key-takeaways]]. diff --git a/.templates/docs/adr/ADR_YYYYMMDD_.md.template b/.templates/docs/adr/ADR_YYYYMMDD_.md.template similarity index 97% rename from .templates/docs/adr/ADR_YYYYMMDD_.md.template rename to .templates/docs/adr/ADR_YYYYMMDD_.md.template index 13c9cbaf..7901e19d 100644 --- a/.templates/docs/adr/ADR_YYYYMMDD_.md.template +++ b/.templates/docs/adr/ADR_YYYYMMDD_.md.template @@ -1,4 +1,4 @@ -# ADR_YYYYMMDD_ +# ADR_YYYYMMDD_ ## Status diff --git a/.templates/docs/interview-notes/IN_YYYYMMDD_.md.template b/.templates/docs/interview-notes/IN_YYYYMMDD_.md.template similarity index 95% rename from .templates/docs/interview-notes/IN_YYYYMMDD_.md.template rename to .templates/docs/interview-notes/IN_YYYYMMDD_.md.template index 91a911c6..913ac194 100644 --- a/.templates/docs/interview-notes/IN_YYYYMMDD_.md.template +++ b/.templates/docs/interview-notes/IN_YYYYMMDD_.md.template @@ -1,4 +1,4 @@ -# IN_YYYYMMDD_ +# IN_YYYYMMDD_ > **Status:** IN-PROGRESS | COMPLETE > **Interviewer:** PO @@ -19,7 +19,7 @@ | Q6 | Failure — what must never happen? | ... | | Q7 | Out-of-scope — what are we explicitly not building? | ... 
| -## +## Domain Questions | ID | Question | Answer | |----|----------|--------| diff --git a/.templates/docs/post-mortem/PM_YYYYMMDD_.md.template b/.templates/docs/post-mortem/PM_YYYYMMDD_.md.template similarity index 87% rename from .templates/docs/post-mortem/PM_YYYYMMDD_.md.template rename to .templates/docs/post-mortem/PM_YYYYMMDD_.md.template index d97b55a0..f9c6f5bc 100644 --- a/.templates/docs/post-mortem/PM_YYYYMMDD_.md.template +++ b/.templates/docs/post-mortem/PM_YYYYMMDD_.md.template @@ -1,4 +1,4 @@ -# PM_YYYYMMDD_: +# PM_YYYYMMDD_: ## Failed At diff --git a/.templates/docs/spec/context_map.md.template b/.templates/docs/spec/context_map.md.template deleted file mode 100644 index 8c6ac600..00000000 --- a/.templates/docs/spec/context_map.md.template +++ /dev/null @@ -1,47 +0,0 @@ -# Context Map: - -> DDD context map showing relationships between bounded contexts. -> Updated by the Software Architect when contexts or relationships change. -> Follows the DDD strategic design patterns for inter-context relationships. 
- ---- - -## Context Relationships - -| Upstream Context | Downstream Context | Relationship Pattern | Translation / Anti-Corruption Layer | -|-----------------|-------------------|---------------------|-------------------------------------| -| `` | `` | | | - ---- - -## Context Map Diagram - -```mermaid -graph LR - ContextA[Context A] -->|Customer-Supplier| ContextB[Context B] - ContextB -->|ACL| ContextC[Context C] -``` - ---- - -## Integration Points - -| Integration | From | To | Mechanism | Contract | -|-------------|------|----|-----------|----------| -| | | | | | - ---- - -## Anti-Corruption Layers - -| ACL | Protects Context | From Context | Translation Rules | -|-----|-----------------|--------------|-------------------| -| `` | `` | `` | | - ---- - -## Changes - -| Date | Source | Change | Reason | -|------|--------|--------|--------| -| YYYY-MM-DD | | | | diff --git a/.templates/docs/spec/domain_model.md.template b/.templates/docs/spec/domain_model.md.template index b85fbb49..334324b9 100644 --- a/.templates/docs/spec/domain_model.md.template +++ b/.templates/docs/spec/domain_model.md.template @@ -4,7 +4,7 @@ > Updated by the Domain Expert when domain understanding evolves. > This document captures what code cannot express: WHY entities exist, HOW aggregates are bounded, and WHAT business capabilities each context serves. > -> **Evolving document:** Event Storming produces event_storming.md (workshop draft). Domain Modeling then reads that draft and formalizes it into the Bounded Contexts, Entities, Relationships, and Aggregate Boundaries sections below. +> **Evolving document:** Domain Modeling formalizes understanding into the sections below. The Bounded Contexts, Events and Commands, Entities, Relationships, Aggregate Boundaries, and Context Map sections are the canonical structural spec. 
--- @@ -22,6 +22,22 @@ --- +## Events and Commands + +### Domain Events + +| Event | Bounded Context | Description | Trigger Command | +|-------|-----------------|-------------|-----------------| +| `` | `` | | `` or External | + +### Commands + +| Command | Bounded Context | Actor | Read Model | Produces Event(s) | Rejection Event | +|---------|-----------------|-------|------------|---------------------|-------------------| +| `` | `` | | | `` | `Rejected>` or None | + +--- + ## Entities | Name | Type | Description | Bounded Context | Aggregate Root? | @@ -47,6 +63,29 @@ --- +## Context Map + + + +| Upstream Context | Downstream Context | Relationship Pattern | Translation / Anti-Corruption Layer | +|-----------------|-------------------|---------------------|-------------------------------------| +| `` | `` | | | + +### Context Map Diagram + +```mermaid +graph TB + +``` + +### Anti-Corruption Layers + +| ACL | Protects Context | From Context | Translation Rules | +|-----|-----------------|--------------|-------------------| +| `` | | | | + +--- + ## Changes | Date | Source | Change | Reason | diff --git a/.templates/docs/spec/event_storming.md.template b/.templates/docs/spec/event_storming.md.template deleted file mode 100644 index cde7693d..00000000 --- a/.templates/docs/spec/event_storming.md.template +++ /dev/null @@ -1,47 +0,0 @@ -# Event Storming: - -> Workshop output from the Event Storming session. -> Produced by the Domain Expert during the facilitate-event-storming skill. -> This is an intermediate artifact — candidates here are formalized by Domain Modeling into domain_model.md. 
- ---- - -## Event Map - -### Domain Events - -| Event | Description | Trigger | Bounded Context | -|-------|-------------|---------|-----------------| -| `` | | | | - -### Commands - -| Command | Description | Produces Event | Actor | -|---------|-------------|----------------|-------| -| `` | | `` | | - -### Read Models - -| Read Model | Description | Consumes Event | Used By | -|------------|-------------|----------------|---------| -| `` | | `` | | - ---- - -## Context Candidates - -> Tentative context boundaries identified during the workshop. Formalized in domain_model.md Bounded Contexts by Domain Modeling. - -| Candidate | Responsibility | Grouped Aggregates | Notes | -|-----------|---------------|--------------------|-------| -| `` | | ``, `` | | - ---- - -## Aggregate Candidates - -> Tentative aggregate boundaries identified during the workshop. Formalized in domain_model.md Aggregate Boundaries by Domain Modeling. - -| Candidate | Events Grouped | Tentative Root Entity | Notes | -|-----------|---------------|-----------------------|-------| -| `` | ``, `` | `` | | diff --git a/.templates/docs/spec/product_definition.md.template b/.templates/docs/spec/product_definition.md.template index ebc4a079..a54236b2 100644 --- a/.templates/docs/spec/product_definition.md.template +++ b/.templates/docs/spec/product_definition.md.template @@ -124,6 +124,26 @@ All criteria must be met before a feature is considered done. --- +## Technology Stack + +> Version constraints in `pyproject.toml` are the source of truth. This table records technology choices and rationale only. + +| Layer | Technology | Rationale | +|-------|-----------|-----------| +| | | | + +--- + +## Dependencies + +> Version constraints in `pyproject.toml` are the source of truth. This table records why each dependency was chosen. 
+ +| Dependency | What it provides | Why not replaced | +|------------|------------------|-----------------| +| `` | | | + +--- + ## Scope Changes | Date | Session | Change | Reason | diff --git a/.templates/docs/spec/technical_design.md.template b/.templates/docs/spec/technical_design.md.template deleted file mode 100644 index 86642018..00000000 --- a/.templates/docs/spec/technical_design.md.template +++ /dev/null @@ -1,151 +0,0 @@ -# Technical Design: - -> Technical design document for the current feature or initiative. -> Updated by the Software Architect when stack, contracts, or interfaces change. -> Contract-first design: API and event schemas are defined here before implementation begins. - ---- - -## Feature - - - ---- - -## Architectural Style - -**Style:** - -**Rationale:** - ---- - -## Quality Attributes - -| Attribute | Architectural Decision | ADR Ref | -|-----------|----------------------|---------| -| | | | - ---- - -## Stack - -| Layer | Technology | Version | Rationale | -|-------|-----------|---------|-----------| -| Language | Python | 3.x | | -| Framework | | | | -| Database | | | | -| Messaging | | | | - ---- - -## Module Structure - -``` -src/ - / - domain/ # Entities, value objects, aggregates, domain services - application/ # Use cases, application services - infrastructure/ # Repositories, external service adapters - api/ # Entry points (CLI, REST, gRPC) -``` - ---- - -## API Contracts - -### - -**Method:** GET | POST | PUT | DELETE | - -**Path:** `/` - -**Request:** -```json -{ - "field": "type" -} -``` - -**Response:** -```json -{ - "field": "type" -} -``` - -**Errors:** -| Code | Meaning | -|------|---------| -| | | - ---- - -## Event Contracts - -### - -**Schema:** -```json -{ - "event_type": "", - "aggregate_id": "uuid", - "payload": {} -} -``` - -**Produced by:** `/` -**Consumed by:** `/` - ---- - -## Interface Definitions - -### - -```python -class (Protocol): - def (self, : ) -> : ... 
-``` - ---- - -## C4 Diagrams - - - ---- - -## Active Constraints - -- - ---- - -## Key Decisions - -- - ---- - -## Dependencies - -| Dependency | What it provides | Why not replaced | -|------------|------------------|-----------------| -| `` | | | - ---- - -## Configuration Keys - -| Key | Type | Default | Description | -|-----|------|---------|-------------| -| `` | string | `""` | | - ---- - -## Changes - -| Date | Source | Change | Reason | -|------|--------|--------|--------| -| YYYY-MM-DD | | | | diff --git a/.templates/tests/features/_test.py.template b/.templates/tests/features/_test.py.template similarity index 82% rename from .templates/tests/features/_test.py.template rename to .templates/tests/features/_test.py.template index 9bb188d5..02f16e08 100644 --- a/.templates/tests/features/_test.py.template +++ b/.templates/tests/features/_test.py.template @@ -2,7 +2,7 @@ import pytest @pytest.mark.skip(reason="not yet implemented") -def test__() -> None: +def test__() -> None: """ Given When diff --git a/AGENTS.md b/AGENTS.md index dfe59c64..0a0a0a40 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -19,9 +19,9 @@ Post-mortem analysis shows these practices prevent most project failures. Violat ## Artifact Templates When creating a document, use the template in `.templates/` that matches the artifact type. Strip the `.templates/` prefix and `.template` suffix to determine the destination path. 
For example: -- `.templates/docs/adr/ADR_YYYYMMDD_.md.template` → `docs/adr/ADR_20260430_my-decision.md` -- `.templates/docs/features/feature.feature.template` → `docs/features/my-feature.feature` -- `.templates/docs/interview-notes/IN_YYYYMMDD_.md.template` → `docs/interview-notes/IN_20260430_session-management.md` +- `.templates/docs/adr/ADR_YYYYMMDD_<adr_id>.md.template` → `docs/adr/ADR_20260430_my_decision.md` +- `.templates/docs/features/feature.feature.template` → `docs/features/my_feature.feature` +- `.templates/docs/interview-notes/IN_YYYYMMDD_<interview_id>.md.template` → `docs/interview-notes/IN_20260430_session_management.md` If no template exists for an artifact type, create the document without one. @@ -73,10 +73,24 @@ Artifact names in `in` and `out` lists use these conventions: | Pattern | Meaning | Example | |---------|---------|---------| | `filename.md` | A specific document | `domain_model.md`, `product_definition.md` | -| `dir/.ext` | A specific instance identified by parameter | `features/.feature`, `interview-notes/.md`, `adr/.md` | +| `dir/<type_id>.ext` | A specific instance identified by parameter | `features/<feature_id>.feature`, `interview-notes/<interview_id>.md`, `adr/<adr_id>.md` | | `dir/*.ext` | Multiple documents of that type available in `in` | `interview-notes/*.md`, `adr/*.md` | | `conceptual_name` | A runtime artifact that passes between states within a flow | `typed_source_stubs`, `test_implementations` | +### Placeholder Naming Convention + +Placeholders in template filenames and flow artifact paths use the `<type_id>` pattern, where **type** identifies the document kind and **_id** signals snake_case formatting: + +| Placeholder | Document type | Format | Example value | +|---|---|---|---| +| `<feature_id>` | Feature file | snake_case | `domain_value_objects`, `data_export` | +| `<adr_id>` | Architectural decision record | snake_case | `protocol_adapters`, `read_write_split` | +| `<interview_id>` | Interview session | snake_case | `user_authentication`, `risk_requirements` | +| `<pm_id>` | Post-mortem | snake_case | `gherkin_removal`,
`invest_artifact_pollution` | +| `<rule_id>` | BDD rule test | snake_case | `token_identity`, `order_lifecycle` | + +**File naming rule:** All filenames use **snake_case** (e.g., `domain_value_objects.feature`, `ADR_20260504_protocol_adapters.md`). **Doc folders** use kebab-case for multi-word names (e.g., `interview-notes/`, `post-mortem/`). **Python/test folders** use snake_case (e.g., `tests/features/<feature_id>/`). + **Wildcards (`*`)** in `in` indicate that multiple documents of that type are available. List the directory contents first, then read selectively based on the task. When a state creates a single instance, use a `<type_id>` name instead. **Runtime artifacts** (not backed by files) use descriptive names that make their purpose clear: `typed_source_stubs` (source files with type signatures only), `test_skeletons` (test files with structure only), `test_implementations` (tests with bodies), `source_implementations` (production code with behavior), `refactored_source` (code after refactoring pass), `feature_commits` (git commits for one feature), `merged_commits` (commits merged to local main), `root_cause_analysis` (analysis findings). @@ -137,7 +151,7 @@ Every state transition must go through flowr. Do not skip steps or guess transit 1. **State entry:** Run `python -m flowr check --session` to see current state, owner, skills, and available transitions (JSON output: parse `attrs.owner`, `attrs.skills`, `attrs.in`, `attrs.out`, `transitions`). Verify all `in` artifacts exist on disk. If any are missing, stop and flag rather than proceeding with assumed knowledge. Announce the state in one line, e.g. `→ specify-feature`. No preamble, no recap of how you got here. 2. **Dispatch to owner agent:** The state's `owner` field names the responsible agent. Call that agent as a subagent with the state's `skills` loaded, passing the state attrs as context.
Owner mapping: `PO` → product-owner, `DE` → domain-expert, `SE` → software-engineer, `SA` → system-architect, `R` → reviewer, `Design Agent` → design-agent, `Setup Agent` → setup-agent. -3. **Do the work:** Load and execute the skill(s) listed in the state's `skills` field. Read `in` artifacts on demand. Write only to `out` artifacts. Commit changes to the branch indicated by the state's `git` attribute (`main` or `feature`). Never switch branches mid-state. +3. **Do the work:** Load and execute the skill(s) listed in the state's `skills` field. Read all `in` artifacts before starting work — they are mandatory context. Write only to `out` artifacts. Commit changes to the branch indicated by the state's `git` attribute (`main` or `feature`). Never switch branches mid-state. 4. **State exit:** The anchor item in the todo handles this (see [[workflow/todo-anchor-protocol#key-takeaways]]). ### Convention Boundary @@ -190,12 +204,17 @@ Before exiting a project-phase flow (discovery, architecture, branding, setup), Announce the state once at the top, then go quiet: - **Respect the artifact contract:** The state's attrs define what the owner agent may read and write: - - `in`: Read-only context. List what's available first, then read only what the task requires. No section specifications. - - `out`: May create or edit. Section sub-lists indicate which sections the state should produce or update. + - `in`: Mandatory context. All `in` artifacts must be read in full before starting work. For wildcard patterns (`*.md`), list the directory first, then read all discovered files. The `in` list defines what you *must* read — no skipping, no selective reading. + - `out`: May create or edit. Section sub-lists indicate which sections the state should produce or update. Follow the **out artifact protocol** (see below). - Files not in `out` must not be written to. 
If findings affect an artifact outside the output contract, flag them in output notes and defer the change to the step that owns that artifact. - The flow contract must always be followed unless the stakeholder explicitly asks to break it. - - **Artifact existence guarantee:** When a flow state needs a file artifact that does not yet exist, it is created from the matching template in `.templates/` (if one exists). If no template exists for a non-Python file referenced in `in`/`out`, raise an error for the stakeholder to decide. Files are then updated when a state writes to them or their sections. Environment artifacts (e.g., `coverage_reports`, `test_output`, `linter_output`) are produced by tooling rather than flow states. They exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. -- **Read inputs on demand, not eagerly.** When `in` lists artifacts, discover what's available first (`ls`, `find`), then read only the files and sections needed for the current task. The `in` list defines what you *may* read, not what you *must* read up front. This applies to all files: spec documents, production code, and test code. List directories first, read selectively. Loading all `in` artifacts before starting wastes context and causes middle-position attention degradation (Liu et al., 2023). + - **Cumulative editing:** When a flow loops back to a state that was previously executed (e.g., `needs_reinterview` → `stakeholder-interview` → `domain-discovery`), the `out` artifact is **edited**, not recreated. The agent reads the existing file, incorporates new information, and adjusts existing content. This is especially important for `domain_model.md` and `glossary.md` which accumulate knowledge across multiple discovery iterations. +- **Out artifact protocol:** Before writing to any `out` artifact: + 1. Check if the file exists on disk. + 2. 
**If it exists** → read it, then edit only the sections declared in the flow's `out` section sub-lists. Preserve existing content outside those sections. + 3. **If it does not exist** → resolve the template path: take the destination path, prepend `.templates/`, append `.template` (e.g., `docs/spec/domain_model.md` → `.templates/docs/spec/domain_model.md.template`). Copy the template to the destination path, then edit the declared sections. Strip any template placeholders during editing. + 4. **If no template exists** for a non-Python file referenced in `in`/`out`, raise an error for the stakeholder to decide. + 5. **Environment artifacts** (e.g., `coverage_reports`, `test_output`, `linter_output`) are produced by tooling rather than flow states. They exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. - **Specification documents are read-only during development.** During TDD and review cycles, the SE and reviewer may ONLY modify production code and test code. Spec document inconsistencies must be FLAGGED in output notes, not fixed directly. Spec docs are owned by other flow states and can only be changed through the appropriate flow step, after code is reviewed and approved. - **Flag issues with precise citations.** When flagging a problem during review or adversarial analysis, include file:line references (e.g., "domain_model.md:23 conflicts with login.feature:15"). Vague findings create rework. - **Do the work with the fewest, quietest commands.** Suppress verbose output. If a command can be scoped with a flag, pipe, or limit, use it. Don't dump full files or directory listings when a targeted query answers the question. 
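The template-resolution rule in the out artifact protocol (prepend `.templates/`, append `.template`, copy on first write) can be sketched as a small helper. This is an illustrative sketch of the protocol only; `resolve_template` and `materialize` are hypothetical names, not part of the flowr CLI:

```python
from pathlib import Path


def resolve_template(dest: str) -> Path:
    """docs/spec/domain_model.md -> .templates/docs/spec/domain_model.md.template"""
    return Path(".templates") / (dest + ".template")


def materialize(dest: str) -> Path:
    """Apply the out-artifact protocol for a file artifact."""
    target = Path(dest)
    if target.exists():
        # Existing file: edit only the declared sections; preserve the rest.
        return target
    template = resolve_template(dest)
    if not template.exists():
        # Non-Python file with no template: stop and ask the stakeholder.
        raise FileNotFoundError(f"no template for {dest}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(template.read_text())
    return target
```

After copying, the state still has to strip any remaining template placeholders while filling the declared sections.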
From cf9dcfcdf6f4b79883beab14db230803e085fd8b Mon Sep 17 00:00:00 2001
From: nullhack
Date: Fri, 8 May 2026 04:57:06 -0400
Subject: [PATCH 2/3] feat: consolidate spec docs, improve discovery flow, add feature derivation knowledge

- Merge event_storming, context_map, technical_design into domain_model and product_definition templates
- Add Events/Commands and Context Map sections to domain_model.md.template
- Add Technology Stack and Dependencies sections to product_definition.md.template
- Discovery flow: merge domain discovery into 3-skill state, add discover-rules to feature-discovery
- Architecture flow: merge adr-draft into technical-design state
- New knowledge: domain-modeling, feature-boundaries, rule-derivation
- Enriched knowledge: event-storming, ubiquitous-language, feature-discovery
- All skills: read all in-artifacts before starting work
- Placeholder naming: `<type_id>` pattern (feature_id, adr_id, session_id, pm_id, rule_id)
- Delete document-dependencies.yaml (unused)
---
 AGENTS.md | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/AGENTS.md b/AGENTS.md
index 0a0a0a40..e0481f79 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -77,17 +77,7 @@ Artifact names in `in` and `out` lists use these conventions:
| `dir/*.ext` | Multiple documents of that type available in `in` | `interview-notes/*.md`, `adr/*.md` |
| `conceptual_name` | A runtime artifact that passes between states within a flow | `typed_source_stubs`, `test_implementations` |

-### Placeholder Naming Convention
-
-Placeholders in template filenames and flow artifact paths use the `<type_id>` pattern where **type** identifies the document kind and **_id** signals snake_case formatting:
-
-| Placeholder | Document type | Format | Example value |
-|---|---|---|---|
-| `<feature_id>` | Feature file | snake_case | `domain_value_objects`, `data_export` |
-| `<adr_id>` | Architectural decision record | snake_case | `protocol_adapters`, `read_write_split` |
-| `<session_id>` | Interview session | snake_case | `user_authentication`, `risk_requirements` |
-| `<pm_id>` | Post-mortem | snake_case | `gherkin_removal`, `invest_artifact_pollution` |
-| `<rule_id>` | BDD rule test | snake_case | `token_identity`, `order_lifecycle` |
+Placeholders in template filenames and flow artifact paths use the `<type_id>` pattern where **type** identifies the document kind and **_id** signals snake_case formatting. See template filenames for the canonical placeholder names.

**File naming rule:** All filenames use **snake_case** (e.g., `domain_value_objects.feature`, `ADR_20260504_protocol_adapters.md`). **Doc folders** use kebab-case for multi-word names (e.g., `interview-notes/`, `post-mortem/`). **Python/test folders** use snake_case (e.g., `tests/features/`).

From 07793517104108b64fcdf1b759c1beb31a5617ab Mon Sep 17 00:00:00 2001
From: nullhack
Date: Fri, 8 May 2026 05:28:24 -0400
Subject: [PATCH 3/3] refactor: standardize flow identifiers to kebab-case

Port kebab-case naming convention from cex-mm. All flow identifiers (transitions, conditions, evidence keys, runtime artifacts) now use hyphens instead of underscores. Also includes select-feature skill and AGENTS.md session param updates.
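The rename this commit performs is mechanical enough to script. A minimal sketch, assuming the exemption set below (`kebab_line` and `KEEP` are hypothetical names, and the actual change may have been made differently): kebab-case bare snake_case identifiers on a flow-file line while leaving flow params, spec-section names, and filenames alone.

```python
import re

# Names that stay snake_case even inside flow files: flow params and
# spec-section names (illustrative subset, inferred from the diff).
KEEP = {"feature_id", "quality_attributes", "git_branch", "rules_business"}

SNAKE = re.compile(r"\b[a-z]+(?:_[a-z]+)+\b")


def kebab_line(line: str) -> str:
    """Convert snake_case flow identifiers on one YAML line to kebab-case."""
    if "." in line:
        # Lines mentioning filenames (domain_model.md, features/*.feature)
        # need manual review; this sketch skips them entirely.
        return line
    return SNAKE.sub(
        lambda m: m.group(0) if m.group(0) in KEEP else m.group(0).replace("_", "-"),
        line,
    )
```

Running this over the `next:`, `conditions:`, and evidence-key lines of each flow YAML reproduces the bulk of the hunks below.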
--- .flowr/flows/architecture-flow.mermaid | 2 +- .flowr/flows/architecture-flow.yaml | 50 +++++------ .flowr/flows/branding-flow.mermaid | 2 +- .flowr/flows/branding-flow.yaml | 24 +++--- .flowr/flows/delivery-flow.mermaid | 2 +- .flowr/flows/delivery-flow.yaml | 24 +++--- .flowr/flows/development-flow.mermaid | 2 +- .flowr/flows/development-flow.yaml | 30 +++---- .flowr/flows/discovery-flow.mermaid | 2 +- .flowr/flows/discovery-flow.yaml | 18 ++-- .flowr/flows/feature-development-flow.mermaid | 2 +- .flowr/flows/feature-development-flow.yaml | 14 ++-- .flowr/flows/main-flow.mermaid | 2 +- .flowr/flows/main-flow.yaml | 4 +- .flowr/flows/planning-flow.mermaid | 2 +- .flowr/flows/planning-flow.yaml | 82 +++++++++---------- .flowr/flows/post-mortem-flow.mermaid | 2 +- .flowr/flows/post-mortem-flow.yaml | 16 ++-- .flowr/flows/review-gate-flow.mermaid | 2 +- .flowr/flows/review-gate-flow.yaml | 36 ++++---- .flowr/flows/setup-project-flow.mermaid | 2 +- .flowr/flows/setup-project-flow.yaml | 42 +++++----- .flowr/flows/tdd-cycle-flow.mermaid | 2 +- .flowr/flows/tdd-cycle-flow.yaml | 30 +++---- .../knowledge/workflow/flowr-operations.md | 2 +- .opencode/knowledge/workflow/flowr-spec.md | 6 +- .opencode/skills/select-feature/SKILL.md | 2 +- AGENTS.md | 16 ++-- 28 files changed, 210 insertions(+), 210 deletions(-) diff --git a/.flowr/flows/architecture-flow.mermaid b/.flowr/flows/architecture-flow.mermaid index 6adff076..2506f89f 100644 --- a/.flowr/flows/architecture-flow.mermaid +++ b/.flowr/flows/architecture-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"architecture-assessment\" as architecture-assessment\n state \"context-mapping\" as context-mapping\n state \"technical-design\" as technical-design\n state \"review-signoff\" as review-signoff\n architecture-assessment --> complete : no_architecture_needed | architecture_complete: ==verified\n architecture-assessment --> context-mapping : needs_context_update | domain_model_md: ==exists\n 
architecture-assessment --> technical-design : needs_technical_design | domain_model_md: ==exists\n architecture-assessment --> context-mapping : greenfield | domain_model_md: ==missing\n architecture-assessment --> needs_discovery : delivery_mismatch_unresolvable\n architecture-assessment --> needs_discovery : needs_discovery\n context-mapping --> technical-design : done\n context-mapping --> needs_discovery : needs_discovery\n technical-design --> review-signoff : done\n review-signoff --> complete : approved | alignment: ==domain_model_verified, adr_compliance: ==adrs_respected, committed_to_main_locally: ==verified\n review-signoff --> architecture-assessment : inconsistent\n review-signoff --> needs_discovery : needs_discovery" + "mermaid": "stateDiagram-v2\n state \"architecture-assessment\" as architecture-assessment\n state \"context-mapping\" as context-mapping\n state \"technical-design\" as technical-design\n state \"review-signoff\" as review-signoff\n architecture-assessment --> complete : no-architecture-needed | architecture-complete: ==verified\n architecture-assessment --> context-mapping : needs-context-update | domain-model-md: ==exists\n architecture-assessment --> technical-design : needs-technical-design | domain-model-md: ==exists\n architecture-assessment --> context-mapping : greenfield | domain-model-md: ==missing\n architecture-assessment --> needs-discovery : delivery-mismatch-unresolvable\n architecture-assessment --> needs-discovery : needs-discovery\n context-mapping --> technical-design : done\n context-mapping --> needs-discovery : needs-discovery\n technical-design --> review-signoff : done\n review-signoff --> complete : approved | alignment: ==domain-model-verified, adr-compliance: ==adrs-respected, committed-to-main-locally: ==verified\n review-signoff --> architecture-assessment : inconsistent\n review-signoff --> needs-discovery : needs-discovery" } diff --git a/.flowr/flows/architecture-flow.yaml 
b/.flowr/flows/architecture-flow.yaml index efb6cda5..28f5010a 100644 --- a/.flowr/flows/architecture-flow.yaml +++ b/.flowr/flows/architecture-flow.yaml @@ -2,7 +2,7 @@ flow: architecture-flow version: 8.0.0 exits: - complete - - needs_discovery + - needs-discovery states: - id: architecture-assessment @@ -20,27 +20,27 @@ states: - deployment - quality_attributes conditions: - architecture_complete: - architecture_complete: ==verified - architecture_exists: - domain_model_md: ==exists - no_architecture_exists: - domain_model_md: ==missing + architecture-complete: + architecture-complete: ==verified + architecture-exists: + domain-model-md: ==exists + no-architecture-exists: + domain-model-md: ==missing next: - no_architecture_needed: + no-architecture-needed: to: complete - when: architecture_complete - needs_context_update: + when: architecture-complete + needs-context-update: to: context-mapping - when: architecture_exists - needs_technical_design: + when: architecture-exists + needs-technical-design: to: technical-design - when: architecture_exists + when: architecture-exists greenfield: to: context-mapping - when: no_architecture_exists - delivery_mismatch_unresolvable: needs_discovery - needs_discovery: needs_discovery + when: no-architecture-exists + delivery-mismatch-unresolvable: needs-discovery + needs-discovery: needs-discovery - id: context-mapping attrs: @@ -59,7 +59,7 @@ states: - changes next: done: technical-design - needs_discovery: needs_discovery + needs-discovery: needs-discovery - id: technical-design attrs: @@ -95,16 +95,16 @@ states: - glossary.md out: [] conditions: - architecture_approved: - alignment: ==domain_model_verified - adr_compliance: ==adrs_respected - committed_to_main_locally: - committed_to_main_locally: ==verified + architecture-approved: + alignment: ==domain-model-verified + adr-compliance: ==adrs-respected + committed-to-main-locally: + committed-to-main-locally: ==verified next: approved: to: complete when: - - 
architecture_approved - - committed_to_main_locally + - architecture-approved + - committed-to-main-locally inconsistent: architecture-assessment - needs_discovery: needs_discovery + needs-discovery: needs-discovery diff --git a/.flowr/flows/branding-flow.mermaid b/.flowr/flows/branding-flow.mermaid index 1e320244..3c79b145 100644 --- a/.flowr/flows/branding-flow.mermaid +++ b/.flowr/flows/branding-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"setup-branding\" as setup-branding\n state \"design-colors\" as design-colors\n state \"design-assets\" as design-assets\n setup-branding --> design-colors : confirmed\n setup-branding --> cancelled : cancelled\n design-colors --> design-assets : approved\n design-colors --> design-colors : revise\n design-colors --> cancelled : cancelled\n design-assets --> branded : approved | logo_monochrome: ==passes, logo_scalability: ==passes, logo_blur_test: ==passes, committed_to_main_locally: ==verified\n design-assets --> design-assets : revise\n design-assets --> cancelled : cancelled" + "mermaid": "stateDiagram-v2\n state \"setup-branding\" as setup-branding\n state \"design-colors\" as design-colors\n state \"design-assets\" as design-assets\n setup-branding --> design-colors : confirmed\n setup-branding --> cancelled : cancelled\n design-colors --> design-assets : approved\n design-colors --> design-colors : revise\n design-colors --> cancelled : cancelled\n design-assets --> branded : approved | logo-monochrome: ==passes, logo-scalability: ==passes, logo-blur-test: ==passes, committed-to-main-locally: ==verified\n design-assets --> design-assets : revise\n design-assets --> cancelled : cancelled" } diff --git a/.flowr/flows/branding-flow.yaml b/.flowr/flows/branding-flow.yaml index 3324a814..e4b30ec5 100644 --- a/.flowr/flows/branding-flow.yaml +++ b/.flowr/flows/branding-flow.yaml @@ -52,21 +52,21 @@ states: - docs/assets/logo.svg - docs/assets/banner.svg conditions: - monochrome_passed: - 
logo_monochrome: ==passes - scalability_passed: - logo_scalability: ==passes - blur_passed: - logo_blur_test: ==passes - assets_committed: - committed_to_main_locally: ==verified + monochrome-passed: + logo-monochrome: ==passes + scalability-passed: + logo-scalability: ==passes + blur-passed: + logo-blur-test: ==passes + assets-committed: + committed-to-main-locally: ==verified next: approved: to: branded when: - - monochrome_passed - - scalability_passed - - blur_passed - - assets_committed + - monochrome-passed + - scalability-passed + - blur-passed + - assets-committed revise: design-assets cancelled: cancelled \ No newline at end of file diff --git a/.flowr/flows/delivery-flow.mermaid b/.flowr/flows/delivery-flow.mermaid index 5d69634c..1441c480 100644 --- a/.flowr/flows/delivery-flow.mermaid +++ b/.flowr/flows/delivery-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"acceptance\" as acceptance\n state \"local-merge\" as local-merge\n state \"publish-decision\" as publish-decision\n state \"pr-creation\" as pr-creation\n acceptance --> local-merge : approved | feature_status: ==ACCEPTED\n acceptance --> rejected : rejected\n local-merge --> publish-decision : merged\n local-merge --> needs_development : conflict\n publish-decision --> next-feature : accumulate\n publish-decision --> pr-creation : publish\n pr-creation --> next-feature : approved | ci_passes: ==verified, no_changes_requested: ==verified\n pr-creation --> needs_development : changes_requested\n pr-creation --> cancelled : cancelled" + "mermaid": "stateDiagram-v2\n state \"acceptance\" as acceptance\n state \"local-merge\" as local-merge\n state \"publish-decision\" as publish-decision\n state \"pr-creation\" as pr-creation\n acceptance --> local-merge : approved | feature-status: ==ACCEPTED\n acceptance --> rejected : rejected\n local-merge --> publish-decision : merged\n local-merge --> needs-development : conflict\n publish-decision --> next-feature : accumulate\n 
publish-decision --> pr-creation : publish\n pr-creation --> next-feature : approved | ci-passes: ==verified, no-changes-requested: ==verified\n pr-creation --> needs-development : changes-requested\n pr-creation --> cancelled : cancelled" } diff --git a/.flowr/flows/delivery-flow.yaml b/.flowr/flows/delivery-flow.yaml index cf1df3f5..01e91b5b 100644 --- a/.flowr/flows/delivery-flow.yaml +++ b/.flowr/flows/delivery-flow.yaml @@ -5,7 +5,7 @@ params: [feature_id] exits: - next-feature - rejected - - needs_development + - needs-development - cancelled states: @@ -25,12 +25,12 @@ states: - acceptance_evidence - approval_record conditions: - feature_accepted: - feature_status: ==ACCEPTED + feature-accepted: + feature-status: ==ACCEPTED next: approved: to: local-merge - when: feature_accepted + when: feature-accepted rejected: rejected - id: local-merge @@ -41,14 +41,14 @@ states: skills: - merge-local in: - - feature_commits + - feature-commits - approval_record - features/.feature out: - - merged_commits + - merged-commits next: merged: publish-decision - conflict: needs_development + conflict: needs-development - id: publish-decision attrs: @@ -58,7 +58,7 @@ states: skills: - decide-batch-action in: - - merged_commits + - merged-commits out: [] next: accumulate: next-feature @@ -72,16 +72,16 @@ states: skills: - create-pr in: - - merged_commits + - merged-commits - features/.feature out: [] conditions: merged: - ci_passes: ==verified - no_changes_requested: ==verified + ci-passes: ==verified + no-changes-requested: ==verified next: approved: to: next-feature when: merged - changes_requested: needs_development + changes-requested: needs-development cancelled: cancelled \ No newline at end of file diff --git a/.flowr/flows/development-flow.mermaid b/.flowr/flows/development-flow.mermaid index e07c02f7..3c4bea8d 100644 --- a/.flowr/flows/development-flow.mermaid +++ b/.flowr/flows/development-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state 
\"project-structuring\" as project-structuring\n tdd-cycle --> tdd-cycle-flow\n note right of tdd-cycle: invokes tdd-cycle-flow\n state \"tdd-cycle\" as tdd-cycle\n review-gate --> review-gate-flow\n note right of review-gate: invokes review-gate-flow\n state \"review-gate\" as review-gate\n state \"commit\" as commit\n project-structuring --> tdd-cycle : ready\n project-structuring --> needs_planning : needs_planning\n tdd-cycle --> review-gate : all_green | yagni: ==no_premature_abstractions, kiss: ==simplest_solution, dry: ==no_duplicated_logic, objcal: ==calisthenics_followed, smells: ==all_smells_addressed, solid: ==principles_applied, patterns: ==patterns_justified\n tdd-cycle --> project-structuring : blocked\n review-gate --> commit : pass\n review-gate --> tdd-cycle : fail\n commit --> done : done" + "mermaid": "stateDiagram-v2\n state \"project-structuring\" as project-structuring\n tdd-cycle --> tdd-cycle-flow\n note right of tdd-cycle: invokes tdd-cycle-flow\n state \"tdd-cycle\" as tdd-cycle\n review-gate --> review-gate-flow\n note right of review-gate: invokes review-gate-flow\n state \"review-gate\" as review-gate\n state \"commit\" as commit\n project-structuring --> tdd-cycle : ready\n project-structuring --> needs-planning : needs-planning\n tdd-cycle --> review-gate : all-green | yagni: ==no-premature-abstractions, kiss: ==simplest-solution, dry: ==no-duplicated-logic, objcal: ==calisthenics-followed, smells: ==all-smells-addressed, solid: ==principles-applied, patterns: ==patterns-justified\n tdd-cycle --> project-structuring : blocked\n review-gate --> commit : pass\n review-gate --> tdd-cycle : fail\n commit --> done : done" } diff --git a/.flowr/flows/development-flow.yaml b/.flowr/flows/development-flow.yaml index 251c2f1b..9191f518 100644 --- a/.flowr/flows/development-flow.yaml +++ b/.flowr/flows/development-flow.yaml @@ -3,7 +3,7 @@ version: 8.0.0 params: [feature_id] exits: - done - - needs_planning + - needs-planning states: - id: 
project-structuring @@ -22,7 +22,7 @@ states: - git_branch next: ready: tdd-cycle - needs_planning: needs_planning + needs-planning: needs-planning - id: tdd-cycle attrs: @@ -31,18 +31,18 @@ states: flow: tdd-cycle-flow flow-version: "^4" conditions: - design_declared: - yagni: ==no_premature_abstractions - kiss: ==simplest_solution - dry: ==no_duplicated_logic - objcal: ==calisthenics_followed - smells: ==all_smells_addressed - solid: ==principles_applied - patterns: ==patterns_justified + design-declared: + yagni: ==no-premature-abstractions + kiss: ==simplest-solution + dry: ==no-duplicated-logic + objcal: ==calisthenics-followed + smells: ==all-smells-addressed + solid: ==principles-applied + patterns: ==patterns-justified next: - all_green: + all-green: to: review-gate - when: design_declared + when: design-declared blocked: project-structuring - id: review-gate @@ -63,13 +63,13 @@ states: skills: - commit-implementation in: - - test_implementations - - source_implementations + - test-implementations + - source-implementations - design_review_evidence - structure_review_evidence - conventions_review_evidence - features/.feature out: - - feature_commits + - feature-commits next: done: done \ No newline at end of file diff --git a/.flowr/flows/discovery-flow.mermaid b/.flowr/flows/discovery-flow.mermaid index 32dde4a8..cffb7b17 100644 --- a/.flowr/flows/discovery-flow.mermaid +++ b/.flowr/flows/discovery-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"stakeholder-interview\" as stakeholder-interview\n state \"domain-discovery\" as domain-discovery\n state \"scope-boundary\" as scope-boundary\n state \"feature-discovery\" as feature-discovery\n stakeholder-interview --> domain-discovery : needs_full_discovery\n stakeholder-interview --> scope-boundary : needs_scope_only\n stakeholder-interview --> complete : already_known\n domain-discovery --> scope-boundary : done\n domain-discovery --> stakeholder-interview : needs_reinterview\n 
scope-boundary --> feature-discovery : done\n scope-boundary --> stakeholder-interview : needs_reinterview\n feature-discovery --> complete : done | committed_to_main_locally: ==verified\n feature-discovery --> stakeholder-interview : needs_reinterview" + "mermaid": "stateDiagram-v2\n state \"stakeholder-interview\" as stakeholder-interview\n state \"domain-discovery\" as domain-discovery\n state \"scope-boundary\" as scope-boundary\n state \"feature-discovery\" as feature-discovery\n stakeholder-interview --> domain-discovery : needs-full-discovery\n stakeholder-interview --> scope-boundary : needs-scope-only\n stakeholder-interview --> complete : already-known\n domain-discovery --> scope-boundary : done\n domain-discovery --> stakeholder-interview : needs-reinterview\n scope-boundary --> feature-discovery : done\n scope-boundary --> stakeholder-interview : needs-reinterview\n feature-discovery --> complete : done | committed-to-main-locally: ==verified\n feature-discovery --> stakeholder-interview : needs-reinterview" } diff --git a/.flowr/flows/discovery-flow.yaml b/.flowr/flows/discovery-flow.yaml index 7087e3ed..619ac6a9 100644 --- a/.flowr/flows/discovery-flow.yaml +++ b/.flowr/flows/discovery-flow.yaml @@ -18,9 +18,9 @@ states: out: - "interview-notes/.md" next: - needs_full_discovery: domain-discovery - needs_scope_only: scope-boundary - already_known: complete + needs-full-discovery: domain-discovery + needs-scope-only: scope-boundary + already-known: complete - id: domain-discovery attrs: @@ -48,7 +48,7 @@ states: - glossary.md next: done: scope-boundary - needs_reinterview: stakeholder-interview + needs-reinterview: stakeholder-interview - id: scope-boundary attrs: @@ -72,7 +72,7 @@ states: - deployment next: done: feature-discovery - needs_reinterview: stakeholder-interview + needs-reinterview: stakeholder-interview - id: feature-discovery attrs: @@ -94,10 +94,10 @@ states: - rules_business - constraints conditions: - committed_to_main_locally: - 
committed_to_main_locally: ==verified + committed-to-main-locally: + committed-to-main-locally: ==verified next: done: to: complete - when: committed_to_main_locally - needs_reinterview: stakeholder-interview + when: committed-to-main-locally + needs-reinterview: stakeholder-interview diff --git a/.flowr/flows/feature-development-flow.mermaid b/.flowr/flows/feature-development-flow.mermaid index 0c8dc9b3..064399e0 100644 --- a/.flowr/flows/feature-development-flow.mermaid +++ b/.flowr/flows/feature-development-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n planning --> planning-flow\n note right of planning: invokes planning-flow\n state \"planning\" as planning\n development --> development-flow\n note right of development: invokes development-flow\n state \"development\" as development\n delivery --> delivery-flow\n note right of delivery: invokes delivery-flow\n state \"delivery\" as delivery\n post-mortem --> post-mortem-flow\n note right of post-mortem: invokes post-mortem-flow\n state \"post-mortem\" as post-mortem\n planning --> development : complete\n planning --> needs_architecture : needs_architecture\n planning --> completed : no_features\n development --> delivery : done\n development --> planning : needs_planning\n delivery --> planning : next-feature\n delivery --> post-mortem : rejected\n delivery --> development : needs_development\n delivery --> cancelled : cancelled\n post-mortem --> planning : complete\n post-mortem --> needs_architecture : needs_architecture\n post-mortem --> cancelled : no_action" + "mermaid": "stateDiagram-v2\n planning --> planning-flow\n note right of planning: invokes planning-flow\n state \"planning\" as planning\n development --> development-flow\n note right of development: invokes development-flow\n state \"development\" as development\n delivery --> delivery-flow\n note right of delivery: invokes delivery-flow\n state \"delivery\" as delivery\n post-mortem --> post-mortem-flow\n note right of 
post-mortem: invokes post-mortem-flow\n state \"post-mortem\" as post-mortem\n planning --> development : complete\n planning --> needs-architecture : needs-architecture\n planning --> completed : no-features\n development --> delivery : done\n development --> planning : needs-planning\n delivery --> planning : next-feature\n delivery --> post-mortem : rejected\n delivery --> development : needs-development\n delivery --> cancelled : cancelled\n post-mortem --> planning : complete\n post-mortem --> needs-architecture : needs-architecture\n post-mortem --> cancelled : no-action" } diff --git a/.flowr/flows/feature-development-flow.yaml b/.flowr/flows/feature-development-flow.yaml index c0013bdc..4c21e267 100644 --- a/.flowr/flows/feature-development-flow.yaml +++ b/.flowr/flows/feature-development-flow.yaml @@ -3,7 +3,7 @@ version: 7.0.0 params: [feature_id] exits: - - needs_architecture + - needs-architecture - cancelled - completed @@ -16,8 +16,8 @@ states: flow-version: "^7" next: complete: development - needs_architecture: needs_architecture - no_features: completed + needs-architecture: needs-architecture + no-features: completed - id: development attrs: @@ -27,7 +27,7 @@ states: flow-version: "^6" next: done: delivery - needs_planning: planning + needs-planning: planning - id: delivery attrs: @@ -38,7 +38,7 @@ states: next: next-feature: planning rejected: post-mortem - needs_development: development + needs-development: development cancelled: cancelled - id: post-mortem @@ -49,5 +49,5 @@ states: flow-version: "^3" next: complete: planning - needs_architecture: needs_architecture - no_action: cancelled \ No newline at end of file + needs-architecture: needs-architecture + no-action: cancelled \ No newline at end of file diff --git a/.flowr/flows/main-flow.mermaid b/.flowr/flows/main-flow.mermaid index 6bcb0b8c..92feb646 100644 --- a/.flowr/flows/main-flow.mermaid +++ b/.flowr/flows/main-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n discovery 
--> discovery-flow\n note right of discovery: invokes discovery-flow\n state \"discovery\" as discovery\n architecture --> architecture-flow\n note right of architecture: invokes architecture-flow\n state \"architecture\" as architecture\n feature-development --> feature-development-flow\n note right of feature-development: invokes feature-development-flow\n state \"feature-development\" as feature-development\n discovery --> architecture : complete\n architecture --> feature-development : complete\n architecture --> discovery : needs_discovery\n feature-development --> architecture : needs_architecture\n feature-development --> cancelled : cancelled\n feature-development --> completed : completed" + "mermaid": "stateDiagram-v2\n discovery --> discovery-flow\n note right of discovery: invokes discovery-flow\n state \"discovery\" as discovery\n architecture --> architecture-flow\n note right of architecture: invokes architecture-flow\n state \"architecture\" as architecture\n feature-development --> feature-development-flow\n note right of feature-development: invokes feature-development-flow\n state \"feature-development\" as feature-development\n discovery --> architecture : complete\n architecture --> feature-development : complete\n architecture --> discovery : needs-discovery\n feature-development --> architecture : needs-architecture\n feature-development --> cancelled : cancelled\n feature-development --> completed : completed" } diff --git a/.flowr/flows/main-flow.yaml b/.flowr/flows/main-flow.yaml index d7d8fe76..adf303f1 100644 --- a/.flowr/flows/main-flow.yaml +++ b/.flowr/flows/main-flow.yaml @@ -20,7 +20,7 @@ states: flow-version: "^6" next: complete: feature-development - needs_discovery: discovery + needs-discovery: discovery - id: feature-development attrs: @@ -29,6 +29,6 @@ states: flow: feature-development-flow flow-version: "^7" next: - needs_architecture: architecture + needs-architecture: architecture cancelled: cancelled completed: completed \ 
No newline at end of file diff --git a/.flowr/flows/planning-flow.mermaid b/.flowr/flows/planning-flow.mermaid index 2be9482a..73684302 100644 --- a/.flowr/flows/planning-flow.mermaid +++ b/.flowr/flows/planning-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"feature-selection\" as feature-selection\n state \"feature-breakdown\" as feature-breakdown\n state \"feature-examples\" as feature-examples\n state \"create-py-stubs\" as create-py-stubs\n state \"definition-of-done\" as definition-of-done\n state \"ready\" as ready\n feature-selection --> feature-breakdown : selected\n feature-selection --> needs_architecture : needs_architecture\n feature-selection --> no_features : no_features\n feature-breakdown --> feature-examples : done | independent: ==no_shared_data_or_side_effects, negotiable: ==scope_negotiated, valuable: ==user_value_clear, estimable: ==effort_estimated, small: ==fits_single_sprint, testable: ==acceptance_criteria_defined\n feature-breakdown --> feature-breakdown : needs_respecification\n feature-examples --> create-py-stubs : done | all_examples_have_ids: ==verified, all_examples_have_gherkin: ==verified, premortem_done: ==verified, concerns: <=2, must_examples: <=8, all_examples_observable: ==each_then_describes_single_outcome, all_examples_declarative: ==behaviour_not_ui_steps, distinctness_verified: ==no_duplicate_observable_behaviours\n feature-examples --> feature-breakdown : needs_respecification\n create-py-stubs --> definition-of-done : done\n definition-of-done --> ready : done\n ready --> complete : done | feature_status: ==BASELINED, committed_to_main_locally: ==verified" + "mermaid": "stateDiagram-v2\n state \"feature-selection\" as feature-selection\n state \"feature-breakdown\" as feature-breakdown\n state \"feature-examples\" as feature-examples\n state \"create-py-stubs\" as create-py-stubs\n state \"definition-of-done\" as definition-of-done\n state \"ready\" as ready\n feature-selection --> 
feature-breakdown : selected\n feature-selection --> needs-architecture : needs-architecture\n feature-selection --> no-features : no-features\n feature-breakdown --> feature-examples : done | independent: ==no-shared-data-or-side-effects, negotiable: ==scope-negotiated, valuable: ==user-value-clear, estimable: ==effort-estimated, small: ==fits-single-sprint, testable: ==acceptance-criteria-defined\n feature-breakdown --> feature-breakdown : needs-respecification\n feature-examples --> create-py-stubs : done | all-examples-have-ids: ==verified, all-examples-have-gherkin: ==verified, premortem-done: ==verified, concerns: <=2, must-examples: <=8, all-examples-observable: ==each-then-describes-single-outcome, all-examples-declarative: ==behaviour-not-ui-steps, distinctness-verified: ==no-duplicate-observable-behaviours\n feature-examples --> feature-breakdown : needs-respecification\n create-py-stubs --> definition-of-done : done\n definition-of-done --> ready : done\n ready --> complete : done | feature-status: ==BASELINED, committed-to-main-locally: ==verified" } diff --git a/.flowr/flows/planning-flow.yaml b/.flowr/flows/planning-flow.yaml index b0669870..e956196e 100644 --- a/.flowr/flows/planning-flow.yaml +++ b/.flowr/flows/planning-flow.yaml @@ -3,8 +3,8 @@ version: 9.0.0 params: [feature_id] exits: - complete - - needs_architecture - - no_features + - needs-architecture + - no-features states: - id: feature-selection @@ -21,8 +21,8 @@ states: out: [] next: selected: feature-breakdown - needs_architecture: needs_architecture - no_features: no_features + needs-architecture: needs-architecture + no-features: no-features - id: feature-breakdown attrs: @@ -40,18 +40,18 @@ states: - features/.feature: - rules conditions: - invest_passed: - independent: ==no_shared_data_or_side_effects - negotiable: ==scope_negotiated - valuable: ==user_value_clear - estimable: ==effort_estimated - small: ==fits_single_sprint - testable: ==acceptance_criteria_defined + invest-passed: 
+ independent: ==no-shared-data-or-side-effects + negotiable: ==scope-negotiated + valuable: ==user-value-clear + estimable: ==effort-estimated + small: ==fits-single-sprint + testable: ==acceptance-criteria-defined next: done: to: feature-examples - when: invest_passed - needs_respecification: feature-breakdown + when: invest-passed + needs-respecification: feature-breakdown - id: feature-examples attrs: @@ -69,29 +69,29 @@ states: - features/.feature: - examples conditions: - examples_have_ids: - all_examples_have_ids: ==verified - examples_have_gherkin: - all_examples_have_gherkin: ==verified - premortem_done: - premortem_done: ==verified - decomposition_valid: + examples-have-ids: + all-examples-have-ids: ==verified + examples-have-gherkin: + all-examples-have-gherkin: ==verified + premortem-done: + premortem-done: ==verified + decomposition-valid: concerns: <=2 - must_examples: <=8 - examples_complete: - all_examples_have_ids: ==verified - all_examples_have_gherkin: ==verified - premortem_done: ==verified + must-examples: <=8 + examples-complete: + all-examples-have-ids: ==verified + all-examples-have-gherkin: ==verified + premortem-done: ==verified concerns: <=2 - must_examples: <=8 - all_examples_observable: ==each_then_describes_single_outcome - all_examples_declarative: ==behaviour_not_ui_steps - distinctness_verified: ==no_duplicate_observable_behaviours + must-examples: <=8 + all-examples-observable: ==each-then-describes-single-outcome + all-examples-declarative: ==behaviour-not-ui-steps + distinctness-verified: ==no-duplicate-observable-behaviours next: done: to: create-py-stubs - when: examples_complete - needs_respecification: feature-breakdown + when: examples-complete + needs-respecification: feature-breakdown - id: create-py-stubs attrs: @@ -105,11 +105,11 @@ states: - domain_model.md - glossary.md out: - - typed_source_stubs - - test_skeletons + - typed-source-stubs + - test-skeletons conditions: - stubs_traceable: - all_ids_have_stubs: 
==verified + stubs-traceable: + all-ids-have-stubs: ==verified next: done: definition-of-done @@ -142,13 +142,13 @@ states: - domain_model.md out: [] conditions: - feature_baselined: - feature_status: ==BASELINED - committed_to_main_locally: - committed_to_main_locally: ==verified + feature-baselined: + feature-status: ==BASELINED + committed-to-main-locally: + committed-to-main-locally: ==verified next: done: to: complete when: - - feature_baselined - - committed_to_main_locally + - feature-baselined + - committed-to-main-locally diff --git a/.flowr/flows/post-mortem-flow.mermaid b/.flowr/flows/post-mortem-flow.mermaid index 438fcfeb..0f296f33 100644 --- a/.flowr/flows/post-mortem-flow.mermaid +++ b/.flowr/flows/post-mortem-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"root-cause-analysis\" as root-cause-analysis\n state \"document-findings\" as document-findings\n state \"extract-lessons\" as extract-lessons\n state \"action-items\" as action-items\n root-cause-analysis --> document-findings : issues_found\n root-cause-analysis --> no_action : no_issues_found\n document-findings --> extract-lessons : done\n extract-lessons --> action-items : done\n action-items --> complete : replan\n action-items --> needs_architecture : architecture_issue\n action-items --> no_action : abandon" + "mermaid": "stateDiagram-v2\n state \"root-cause-analysis\" as root-cause-analysis\n state \"document-findings\" as document-findings\n state \"extract-lessons\" as extract-lessons\n state \"action-items\" as action-items\n root-cause-analysis --> document-findings : issues-found\n root-cause-analysis --> no-action : no-issues-found\n document-findings --> extract-lessons : done\n extract-lessons --> action-items : done\n action-items --> complete : replan\n action-items --> needs-architecture : architecture-issue\n action-items --> no-action : abandon" } diff --git a/.flowr/flows/post-mortem-flow.yaml b/.flowr/flows/post-mortem-flow.yaml index 
4b2e4275..c67629c3 100644 --- a/.flowr/flows/post-mortem-flow.yaml +++ b/.flowr/flows/post-mortem-flow.yaml @@ -2,8 +2,8 @@ flow: post-mortem-flow version: 3.0.0 exits: - complete - - needs_architecture - - no_action + - needs-architecture + - no-action states: - id: root-cause-analysis @@ -15,10 +15,10 @@ states: - analyze-root-cause in: [] out: - - root_cause_analysis + - root-cause-analysis next: - issues_found: document-findings - no_issues_found: no_action + issues-found: document-findings + no-issues-found: no-action - id: document-findings attrs: @@ -28,7 +28,7 @@ states: skills: - document-post-mortem in: - - root_cause_analysis + - root-cause-analysis out: - post-mortem/PM_YYYYMMDD_.md: - failed_at @@ -66,5 +66,5 @@ states: - restart_check next: replan: complete - architecture_issue: needs_architecture - abandon: no_action \ No newline at end of file + architecture-issue: needs-architecture + abandon: no-action \ No newline at end of file diff --git a/.flowr/flows/review-gate-flow.mermaid b/.flowr/flows/review-gate-flow.mermaid index e5bcace7..374c8123 100644 --- a/.flowr/flows/review-gate-flow.mermaid +++ b/.flowr/flows/review-gate-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"design-review\" as design-review\n state \"structure-review\" as structure-review\n state \"conventions-review\" as conventions-review\n design-review --> structure-review : pass | alignment: ==domain_model_verified, adr_compliance: ==adrs_respected\n design-review --> fail : fail\n structure-review --> conventions-review : pass | coverage: ==threshold_met, traceability: ==all_ids_covered, coupling: ==behavior_not_implementation\n structure-review --> fail : fail\n conventions-review --> pass : pass | formatting: ==clean, naming: ==domain_language\n conventions-review --> fail : fail" + "mermaid": "stateDiagram-v2\n state \"design-review\" as design-review\n state \"structure-review\" as structure-review\n state \"conventions-review\" as conventions-review\n 
design-review --> structure-review : pass | alignment: ==domain-model-verified, adr-compliance: ==adrs-respected\n design-review --> fail : fail\n structure-review --> conventions-review : pass | coverage: ==threshold-met, traceability: ==all-ids-covered, coupling: ==behavior-not-implementation\n structure-review --> fail : fail\n conventions-review --> pass : pass | formatting: ==clean, naming: ==domain-language\n conventions-review --> fail : fail" } diff --git a/.flowr/flows/review-gate-flow.yaml b/.flowr/flows/review-gate-flow.yaml index 2b13e341..3f7c7b8e 100644 --- a/.flowr/flows/review-gate-flow.yaml +++ b/.flowr/flows/review-gate-flow.yaml @@ -18,17 +18,17 @@ states: - domain_model.md - glossary.md - product_definition.md - - refactored_source + - refactored-source out: - design_review_evidence conditions: - design_approved: - alignment: ==domain_model_verified - adr_compliance: ==adrs_respected + design-approved: + alignment: ==domain-model-verified + adr-compliance: ==adrs-respected next: pass: to: structure-review - when: design_approved + when: design-approved fail: fail - id: structure-review @@ -40,23 +40,23 @@ states: - review-structure - verify-traceability in: - - coverage_reports - - test_output - - refactored_source + - coverage-reports + - test-output + - refactored-source - features/.feature - domain_model.md - glossary.md out: - structure_review_evidence conditions: - structure_approved: - coverage: ==threshold_met - traceability: ==all_ids_covered - coupling: ==behavior_not_implementation + structure-approved: + coverage: ==threshold-met + traceability: ==all-ids-covered + coupling: ==behavior-not-implementation next: pass: to: conventions-review - when: structure_approved + when: structure-approved fail: fail - id: conventions-review @@ -67,18 +67,18 @@ states: skills: - review-conventions in: - - linter_output - - refactored_source + - linter-output + - refactored-source - product_definition.md - glossary.md out: - 
conventions_review_evidence conditions: - conventions_approved: + conventions-approved: formatting: ==clean - naming: ==domain_language + naming: ==domain-language next: pass: to: pass - when: conventions_approved + when: conventions-approved fail: fail diff --git a/.flowr/flows/setup-project-flow.mermaid b/.flowr/flows/setup-project-flow.mermaid index d2434bf5..e8ef7183 100644 --- a/.flowr/flows/setup-project-flow.mermaid +++ b/.flowr/flows/setup-project-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"assess-requirements\" as assess-requirements\n state \"configure-parameters\" as configure-parameters\n state \"apply-substitutions\" as apply-substitutions\n state \"verify-and-finalize\" as verify-and-finalize\n assess-requirements --> configure-parameters : assessed\n assess-requirements --> cancelled : cancelled\n configure-parameters --> apply-substitutions : confirmed | pyproject_toml: ==exists, readme_md: ==exists, github_workflows_ci_yml: ==exists, license: ==exists, tests_unit_main_test_py: ==exists, app_directory: ==exists\n configure-parameters --> cancelled : missing_files\n apply-substitutions --> verify-and-finalize : applied | no_stale_app_imports: ==verified, package_renamed: ==verified, version_reset: ==verified\n apply-substitutions --> cancelled : failed\n verify-and-finalize --> initialized : initialized | tests_pass: ==verified, imports_valid: ==verified, artifacts_cleaned: ==verified, committed_to_main_locally: ==verified\n verify-and-finalize --> cancelled : failed" + "mermaid": "stateDiagram-v2\n state \"assess-requirements\" as assess-requirements\n state \"configure-parameters\" as configure-parameters\n state \"apply-substitutions\" as apply-substitutions\n state \"verify-and-finalize\" as verify-and-finalize\n assess-requirements --> configure-parameters : assessed\n assess-requirements --> cancelled : cancelled\n configure-parameters --> apply-substitutions : confirmed | pyproject-toml: ==exists, readme-md: 
==exists, github-workflows-ci-yml: ==exists, license: ==exists, tests-unit-main-test-py: ==exists, app-directory: ==exists\n configure-parameters --> cancelled : missing-files\n apply-substitutions --> verify-and-finalize : applied | no-stale-app-imports: ==verified, package-renamed: ==verified, version-reset: ==verified\n apply-substitutions --> cancelled : failed\n verify-and-finalize --> initialized : initialized | tests-pass: ==verified, imports-valid: ==verified, artifacts-cleaned: ==verified, committed-to-main-locally: ==verified\n verify-and-finalize --> cancelled : failed" } diff --git a/.flowr/flows/setup-project-flow.yaml b/.flowr/flows/setup-project-flow.yaml index ae5db2bf..d0d29a13 100644 --- a/.flowr/flows/setup-project-flow.yaml +++ b/.flowr/flows/setup-project-flow.yaml @@ -29,18 +29,18 @@ states: out: - template-config.yaml conditions: - template_files_exist: - pyproject_toml: ==exists - readme_md: ==exists - github_workflows_ci_yml: ==exists + template-files-exist: + pyproject-toml: ==exists + readme-md: ==exists + github-workflows-ci-yml: ==exists license: ==exists - tests_unit_main_test_py: ==exists - app_directory: ==exists + tests-unit-main-test-py: ==exists + app-directory: ==exists next: confirmed: to: apply-substitutions - when: template_files_exist - missing_files: cancelled + when: template-files-exist + missing-files: cancelled - id: apply-substitutions attrs: @@ -60,14 +60,14 @@ states: - template-config.yaml - package_directory conditions: - substitutions_successful: - no_stale_app_imports: ==verified - package_renamed: ==verified - version_reset: ==verified + substitutions-successful: + no-stale-app-imports: ==verified + package-renamed: ==verified + version-reset: ==verified next: applied: to: verify-and-finalize - when: substitutions_successful + when: substitutions-successful failed: cancelled - id: verify-and-finalize @@ -82,16 +82,16 @@ states: out: - git_remote conditions: - verification_passed: - tests_pass: ==verified - 
imports_valid: ==verified - artifacts_cleaned: ==verified - committed_to_main_locally: - committed_to_main_locally: ==verified + verification-passed: + tests-pass: ==verified + imports-valid: ==verified + artifacts-cleaned: ==verified + committed-to-main-locally: + committed-to-main-locally: ==verified next: initialized: to: initialized when: - - verification_passed - - committed_to_main_locally + - verification-passed + - committed-to-main-locally failed: cancelled \ No newline at end of file diff --git a/.flowr/flows/tdd-cycle-flow.mermaid b/.flowr/flows/tdd-cycle-flow.mermaid index 268a83e4..f010650c 100644 --- a/.flowr/flows/tdd-cycle-flow.mermaid +++ b/.flowr/flows/tdd-cycle-flow.mermaid @@ -1,3 +1,3 @@ { - "mermaid": "stateDiagram-v2\n state \"red\" as red\n state \"green\" as green\n state \"refactor\" as refactor\n red --> green : test_written\n red --> blocked : blocked\n green --> refactor : test_passes\n refactor --> red : next_example\n refactor --> all_green : all_examples_pass" + "mermaid": "stateDiagram-v2\n state \"red\" as red\n state \"green\" as green\n state \"refactor\" as refactor\n red --> green : test-written\n red --> blocked : blocked\n green --> refactor : test-passes\n refactor --> red : next-example\n refactor --> all-green : all-examples-pass" } diff --git a/.flowr/flows/tdd-cycle-flow.yaml b/.flowr/flows/tdd-cycle-flow.yaml index f5e60574..1cc47a9f 100644 --- a/.flowr/flows/tdd-cycle-flow.yaml +++ b/.flowr/flows/tdd-cycle-flow.yaml @@ -2,7 +2,7 @@ flow: tdd-cycle-flow version: 4.0.0 params: [feature_id] exits: - - all_green + - all-green - blocked states: @@ -14,15 +14,15 @@ states: skills: - write-test in: - - test_skeletons - - typed_source_stubs + - test-skeletons + - typed-source-stubs - features/.feature - domain_model.md - glossary.md out: - - test_implementations + - test-implementations next: - test_written: green + test-written: green blocked: blocked - id: green @@ -33,15 +33,15 @@ states: skills: - implement-minimum in: - - 
test_implementations - - typed_source_stubs + - test-implementations + - typed-source-stubs - features/.feature - domain_model.md - glossary.md out: - - source_implementations + - source-implementations next: - test_passes: refactor + test-passes: refactor - id: refactor attrs: @@ -51,14 +51,14 @@ states: skills: - refactor in: - - source_implementations - - test_implementations + - source-implementations + - test-implementations - features/.feature - domain_model.md - glossary.md out: - - source_implementations - - refactored_source + - source-implementations + - refactored-source next: - next_example: red - all_examples_pass: all_green \ No newline at end of file + next-example: red + all-examples-pass: all-green \ No newline at end of file diff --git a/.opencode/knowledge/workflow/flowr-operations.md b/.opencode/knowledge/workflow/flowr-operations.md index 9f2d1323..c3d792ec 100644 --- a/.opencode/knowledge/workflow/flowr-operations.md +++ b/.opencode/knowledge/workflow/flowr-operations.md @@ -25,7 +25,7 @@ last-updated: 2026-05-06 **Enhanced `next` Output**: The `next` command shows **all** transitions (open and blocked) with status markers. Each transition has `trigger`, `target`, `status` (`"open"` or `"blocked"`), and `conditions` (null if unguarded, or a dict of condition expressions). This lets you identify what evidence is needed to unblock guarded transitions. -**Evidence**: Some transitions are guarded by conditions (e.g., `feature_accepted: ==ACCEPTED`, `all_ids_have_stubs: ==true`). Set evidence with `--evidence key=value` or `--evidence-json '{"key":"value"}'` when advancing. If a transition is guarded and evidence is not set, the transition will fail. +**Evidence**: Some transitions are guarded by conditions (e.g., `feature-accepted: ==ACCEPTED`, `all-ids-have-stubs: ==true`). Set evidence with `--evidence key=value` or `--evidence-json '{"key":"value"}'` when advancing. If a transition is guarded and evidence is not set, the transition will fail. 
**Choosing a Path**: After completing work, use `next` with your evidence. Transitions with `"status": "open"` are available; `"status": "blocked"` transitions show which conditions need evidence. Choose the path that matches your work outcome. diff --git a/.opencode/knowledge/workflow/flowr-spec.md b/.opencode/knowledge/workflow/flowr-spec.md index 635355a4..87c56d2d 100644 --- a/.opencode/knowledge/workflow/flowr-spec.md +++ b/.opencode/knowledge/workflow/flowr-spec.md @@ -102,18 +102,18 @@ States may define a `conditions` block (sibling of `attrs` and `next`) containin ```yaml conditions: - invest_passed: + invest-passed: independent: ==true negotiable: ==true valuable: ==true next: done: to: next-state - when: invest_passed + when: invest-passed partial: to: review when: - - invest_passed + - invest-passed - { override: "==yes" } ``` diff --git a/.opencode/skills/select-feature/SKILL.md b/.opencode/skills/select-feature/SKILL.md index 4a1623a0..37b9e969 100644 --- a/.opencode/skills/select-feature/SKILL.md +++ b/.opencode/skills/select-feature/SKILL.md @@ -15,4 +15,4 @@ Available knowledge: [[requirements/wsjf]]. `in` artifacts: read all before star - Select the first feature by delivery order from `product_definition.md`. The delivery order was established during discovery and already reflects business priority and technical dependencies. - Skip WSJF scoring: there's nothing to compare against. 6. IF features have `Status: BASELINED` (subsequent runs) → score per [[requirements/wsjf]] and select the highest WSJF score among Dependency=0 candidates. -7. Set the `feature_name` session param to the selected feature's filename stem (without `.feature` extension). +7. Set the `feature-id` session param to the selected feature's filename stem (without `.feature` extension). diff --git a/AGENTS.md b/AGENTS.md index e0481f79..f9aa62b8 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -8,7 +8,7 @@ Post-mortem analysis shows these practices prevent most project failures. 
Violat 4. **Never decompose a feature without stakeholder approval.** If a feature is too large for INVEST, propose the split to the stakeholder with rationale. They decide what's core vs. deferred. 5. **Verify inputs exist before entering a state.** Every state's `in` artifacts must be readable on disk. If they're missing, stop and reconstruct them. Don't proceed with assumed knowledge. 6. **A feature is not done until every interview requirement is traced.** Every stakeholder Q&A must map to either a passing @id test or an explicit stakeholder deferral. Untraced requirements = incomplete delivery. -7. **Respect git branch discipline.** Every state declares `git: main` or `git: feature` in its attrs. Work on `main` when the state says `main`, work on the feature branch when it says `feature`. Never switch branches mid-state. Before exiting a project-phase flow (discovery, architecture, branding, setup), set `committed_to_main_locally: ==verified` evidence. Changes must be committed to main before advancing. +7. **Respect git branch discipline.** Every state declares `git: main` or `git: feature` in its attrs. Work on `main` when the state says `main`, work on the feature branch when it says `feature`. Never switch branches mid-state. Before exiting a project-phase flow (discovery, architecture, branding, setup), set `committed-to-main-locally: ==verified` evidence. Changes must be committed to main before advancing. 
## Project Structure - `.flowr/flows/`: YAML state machine definitions (source of truth for routing) @@ -75,7 +75,7 @@ Artifact names in `in` and `out` lists use these conventions: | `filename.md` | A specific document | `domain_model.md`, `product_definition.md` | | `dir/.ext` | A specific instance identified by parameter | `features/.feature`, `interview-notes/.md`, `adr/.md` | | `dir/*.ext` | Multiple documents of that type available in `in` | `interview-notes/*.md`, `adr/*.md` | -| `conceptual_name` | A runtime artifact that passes between states within a flow | `typed_source_stubs`, `test_implementations` | +| `conceptual_name` | A runtime artifact that passes between states within a flow | `typed-source-stubs`, `test-implementations` | Placeholders in template filenames and flow artifact paths use the `` pattern where **type** identifies the document kind and **_id** signals snake_case formatting. See template filenames for the canonical placeholder names. @@ -83,9 +83,9 @@ Placeholders in template filenames and flow artifact paths use the `` p **Wildcards (`*`)** in `in` indicate that multiple documents of that type are available. List the directory contents first, then read selectively based on the task. When a state creates a single instance, use a `` name instead. -**Runtime artifacts** (not backed by files) use descriptive names that make their purpose clear: `typed_source_stubs` (source files with type signatures only), `test_skeletons` (test files with structure only), `test_implementations` (tests with bodies), `source_implementations` (production code with behavior), `refactored_source` (code after refactoring pass), `feature_commits` (git commits for one feature), `merged_commits` (commits merged to local main), `root_cause_analysis` (analysis findings). 
+**Runtime artifacts** (not backed by files) use descriptive names that make their purpose clear: `typed-source-stubs` (source files with type signatures only), `test-skeletons` (test files with structure only), `test-implementations` (tests with bodies), `source-implementations` (production code with behavior), `refactored-source` (code after refactoring pass), `feature-commits` (git commits for one feature), `merged-commits` (commits merged to local main), `root-cause-analysis` (analysis findings). -**Environment artifacts** are produced by tooling rather than flow states: `coverage_reports` (test coverage output), `test_output` (test runner output), `linter_output` (linter output). These exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. +**Environment artifacts** are produced by tooling rather than flow states: `coverage-reports` (test coverage output), `test-output` (test runner output), `linter-output` (linter output). These exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. ## Flowr Commands @@ -179,7 +179,7 @@ Before starting a flow, create a session to track progress: python -m flowr session init --name ``` -For project-level flows (discovery, architecture, branding, setup), use a descriptive name like `project`. For feature flows, use the feature name. The session tracks the current flow, state, call stack (for subflows), and params (including `feature_name`). When the first state has a `flow:` field, `session init` auto-enters the subflow. +For project-level flows (discovery, architecture, branding, setup), use a descriptive name like `project`. For feature flows, use the feature name. The session tracks the current flow, state, call stack (for subflows), and params (including `feature-id`). When the first state has a `flow:` field, `session init` auto-enters the subflow. 
### Branch Discipline @@ -187,7 +187,7 @@ States declare their git context in `attrs.git`: - `git: main`: all changes are committed to the local main branch - `git: feature`: all changes are committed to the current feature branch -Before exiting a project-phase flow (discovery, architecture, branding, setup), the exit transition requires `committed_to_main_locally: ==verified` evidence. This guarantees project artifacts are persisted before advancing to the next phase. +Before exiting a project-phase flow (discovery, architecture, branding, setup), the exit transition requires `committed-to-main-locally: ==verified` evidence. This guarantees project artifacts are persisted before advancing to the next phase. ### Within a State @@ -198,13 +198,13 @@ Announce the state once at the top, then go quiet: - `out`: May create or edit. Section sub-lists indicate which sections the state should produce or update. Follow the **out artifact protocol** (see below). - Files not in `out` must not be written to. If findings affect an artifact outside the output contract, flag them in output notes and defer the change to the step that owns that artifact. - The flow contract must always be followed unless the stakeholder explicitly asks to break it. - - **Cumulative editing:** When a flow loops back to a state that was previously executed (e.g., `needs_reinterview` → `stakeholder-interview` → `domain-discovery`), the `out` artifact is **edited**, not recreated. The agent reads the existing file, incorporates new information, and adjusts existing content. This is especially important for `domain_model.md` and `glossary.md` which accumulate knowledge across multiple discovery iterations. + - **Cumulative editing:** When a flow loops back to a state that was previously executed (e.g., `needs-reinterview` → `stakeholder-interview` → `domain-discovery`), the `out` artifact is **edited**, not recreated. 
The agent reads the existing file, incorporates new information, and adjusts existing content. This is especially important for `domain_model.md` and `glossary.md` which accumulate knowledge across multiple discovery iterations. - **Out artifact protocol:** Before writing to any `out` artifact: 1. Check if the file exists on disk. 2. **If it exists** → read it, then edit only the sections declared in the flow's `out` section sub-lists. Preserve existing content outside those sections. 3. **If it does not exist** → resolve the template path: take the destination path, prepend `.templates/`, append `.template` (e.g., `docs/spec/domain_model.md` → `.templates/docs/spec/domain_model.md.template`). Copy the template to the destination path, then edit the declared sections. Strip any template placeholders during editing. 4. **If no template exists** for a non-Python file referenced in `in`/`out`, raise an error for the stakeholder to decide. - 5. **Environment artifacts** (e.g., `coverage_reports`, `test_output`, `linter_output`) are produced by tooling rather than flow states. They exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. + 5. **Environment artifacts** (e.g., `coverage-reports`, `test-output`, `linter-output`) are produced by tooling rather than flow states. They exist on disk after running the relevant tool and are referenced in `in` but not in any state's `out`. - **Specification documents are read-only during development.** During TDD and review cycles, the SE and reviewer may ONLY modify production code and test code. Spec document inconsistencies must be FLAGGED in output notes, not fixed directly. Spec docs are owned by other flow states and can only be changed through the appropriate flow step, after code is reviewed and approved. 
- **Flag issues with precise citations.** When flagging a problem during review or adversarial analysis, include file:line references (e.g., "domain_model.md:23 conflicts with login.feature:15"). Vague findings create rework. - **Do the work with the fewest, quietest commands.** Suppress verbose output. If a command can be scoped with a flag, pipe, or limit, use it. Don't dump full files or directory listings when a targeted query answers the question.
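For reviewers of this patch: the kebab-case guard convention it introduces can be summarized in one fragment. This is a minimal sketch mirroring the planning-flow `ready` state exactly as renamed above (state id, condition names, and evidence keys are taken from the diff, not newly defined):

```yaml
# Sketch of the renamed guard pattern: kebab-case condition and evidence keys.
# Mirrors the planning-flow `ready` state after this patch.
- id: ready
  conditions:
    feature-baselined:
      feature-status: ==BASELINED
    committed-to-main-locally:
      committed-to-main-locally: ==verified
  next:
    done:
      to: complete
      when:
        - feature-baselined
        - committed-to-main-locally
```

Advancing via `done` then requires both evidence keys to be set, using the `--evidence key=value` or `--evidence-json` forms described in flowr-operations.md above.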