36 changes: 36 additions & 0 deletions dev/specs/infp-504-artifact-composition/checklists/requirements.md
@@ -0,0 +1,36 @@
# Specification Quality Checklist: Artifact Content Composition via Jinja2 Filters

**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2026-02-18
**Feature**: [spec.md](../spec.md)

## Content Quality

- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
Comment on lines +9 to +11

⚠️ Potential issue | 🟡 Minor

Four checklist items are marked complete but contradict the spec's content.

| Line | Item | Counter-evidence in spec.md |
| --- | --- | --- |
| 9 | `[x]` No implementation details (languages, frameworks, APIs) | FR-001/006/010 name `InfrahubClient`/`InfrahubClientSync`; FR-012 names the file path `infrahub_sdk/template/exceptions.py`; FR-013 names `client.object_store.get(identifier=storage_id)`; FR-002/007 name `FilterDefinition`, `InfrahubFilters`, `Jinja2Template`. |
| 11 | `[x]` Written for non-technical stakeholders | The spec is addressed entirely to SDK implementers, with Python class names, method signatures, and SDK module references throughout. |
| 19 | `[x]` Success criteria are technology-agnostic (no implementation details) | SC-005 names `uv run pytest tests/unit/`; SC-004 names `InfrahubClient`; SC-002 names `validate(restricted=True)`. |
| 30 | `[x]` No implementation details leak into specification | Same evidence as line 9. |

If this spec intentionally blends product requirements with technical design (a "technical spec" rather than a pure PRD), update the checklist criteria to reflect that, or uncheck these items and add a note explaining the intentional deviation.

Also applies to: 19-19, 29-30

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dev/specs/001-artifact-composition/checklists/requirements.md` around lines 9
- 11, The checklist wrongly marks items as complete even though the spec
contains implementation details and is written for SDK implementers; update the
checklist in requirements.md by unchecking the four items at lines shown (the
items about "No implementation details", "Written for non-technical
stakeholders", "Success criteria are technology-agnostic", and "No
implementation details leak") and add a short note stating this is a technical
spec (not a pure PRD) calling out examples such as FR-001/006/010 naming
InfrahubClient/InfrahubClientSync, FR-012 file path
infrahub_sdk/template/exceptions.py, FR-013
client.object_store.get(identifier=storage_id), FR-002/007 naming
FilterDefinition/InfrahubFilters/Jinja2Template, and SC-005 referencing uv run
pytest tests/unit/ so reviewers understand the intentional deviation.

- [x] All mandatory sections completed

## Requirement Completeness

- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified

## Feature Readiness

- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification

## Notes

- One open question remains intentionally: whether to add a Python transform convenience SDK method (FR scope question flagged in Open Questions section, documented for planning phase).
- Ordering guarantee is explicitly out of scope and documented as a known limitation.
- `from_json`/`from_yaml` existence in the current filter set is flagged as an assumption to verify during planning.
Comment on lines +34 to +36

⚠️ Potential issue | 🟡 Minor

Missing blank line after the Notes list.

The Notes list ends at line 36 (end of file) with no trailing blank line, violating the Markdown convention.

📝 Proposed fix

```diff
 - `from_json`/`from_yaml` existence in the current filter set is flagged as an assumption to verify during planning.
+
```

As per coding guidelines: "Add blank lines before and after lists in Markdown files."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dev/specs/001-artifact-composition/checklists/requirements.md` around lines
34 - 36, The Markdown file ends with a Notes list but lacks a trailing blank
line; update dev/specs/001-artifact-composition/checklists/requirements.md by
adding a single blank line after the final list item (the lines starting with
"One open question remains..." / "Ordering guarantee..." /
"`from_json`/`from_yaml` existence...") so there is an empty line at EOF,
conforming to the guideline to add blank lines before and after lists.

161 changes: 161 additions & 0 deletions dev/specs/infp-504-artifact-composition/spec.md
@@ -0,0 +1,161 @@
# Feature specification: Artifact content composition via Jinja2 filters

**Feature Branch**: `infp-504-artifact-composition`
**Created**: 2026-02-18
**Status**: Draft
**Jira**: INFP-504 (part of INFP-304 Artifact of Artifacts initiative)

## Overview

Enable customers building modular configuration pipelines to compose larger artifacts from smaller sub-artifacts by referencing and inlining rendered artifact content directly inside a Jinja2 transform, without duplicating template logic or GraphQL query fields.

## User scenarios & testing *(mandatory)*

### User story 1 - inline artifact content in a composite template (Priority: P1)

A network engineer maintains separate section-level artifacts for routing policy, interfaces, and base config. They want a composite "startup config" artifact whose Jinja2 template pulls in each section's rendered content via a `storage_id` already present in the GraphQL query result — without copy-pasting template logic.

The template uses `artifact.node.storage_id.value | artifact_content` and the rendered output assembles all sections automatically.
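The pattern above can be sketched with a stubbed filter. This is illustrative only: the real `artifact_content` would fetch rendered sub-artifact content from the Infrahub object store, and the storage IDs and section strings here are invented.

```python
from jinja2 import Environment

# Fake object store keyed by storage_id (illustrative values only).
SECTIONS = {
    "sid-routing": "router bgp 65000",
    "sid-interfaces": "interface Ethernet1\n  description uplink",
}

def artifact_content(storage_id: str) -> str:
    # Stub: the proposed filter would call the Infrahub object store instead.
    return SECTIONS[storage_id]

env = Environment()
env.filters["artifact_content"] = artifact_content

composite = env.from_string(
    "{{ routing_sid | artifact_content }}\n"
    "{{ interfaces_sid | artifact_content }}\n"
)
rendered = composite.render(routing_sid="sid-routing", interfaces_sid="sid-interfaces")
```

The rendered output concatenates the section contents in template order, which is the "assembles all sections automatically" behaviour the story describes.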

**Why this priority**: This is the primary use case that delivers the modular pipeline capability. Everything else in this feature supports or extends it.

**Independent Test**: A Jinja2 template calling `artifact_content` with a valid storage_id can be rendered against a real or mocked Infrahub instance and the output matches the expected concatenated artifact contents.

**Acceptance Scenarios**:

1. **Given** a `Jinja2Template` constructed with a valid `InfrahubClient` and a template calling `storage_id | artifact_content`, **When** the template is rendered with a data dict containing a valid storage_id string, **Then** the output contains the raw string content fetched from the object store.
2. **Given** the same setup but the storage_id is null or the object store cannot retrieve the content, **When** rendered, **Then** the filter raises a descriptive error indicating the retrieval failure.
3. **Given** a `Jinja2Template` constructed *without* an `InfrahubClient` and a template calling `artifact_content`, **When** rendered, **Then** an error is raised with a message clearly stating that an `InfrahubClient` is required for this filter.
4. **Given** a template using `artifact_content` and `validate(restricted=True)` is called, **Then** a `JinjaTemplateOperationViolationError` is raised, confirming the filter is blocked in local restricted mode.
Contributor

Not sure that I understand where this one is coming from


---

### User story 2 - inline file object content in a composite template (Priority: P2)

A template author needs to embed the content of a stored file object (as distinct from an artifact) into a Jinja2 template. They use `storage_id | file_object_content` and the same injection and error-handling behaviour applies.

**Why this priority**: Mirrors `artifact_content` for the file-object use case; same implementation pattern, lower novelty.

**Independent Test**: A template calling `file_object_content` renders correctly with a valid storage_id, and raises a descriptive error for null or unresolvable storage_ids.

**Acceptance Scenarios**:

1. **Given** a `Jinja2Template` with a client and a valid file-object storage_id, **When** rendered, **Then** the raw file content string is returned.
2. **Given** a null or missing storage_id value, **When** the filter is invoked, **Then** an error is raised with a descriptive message about the retrieval failure.
3. **Given** no client provided to `Jinja2Template`, **When** the filter is invoked, **Then** an error is raised.

---

### User story 3 - parse structured artifact content in a template (Priority: P3)

A template author retrieves a JSON-formatted artifact and needs to traverse its structure as a dict within the template. They chain `storage_id | artifact_content | from_json` to obtain a parsed object, then access fields normally.

**Why this priority**: Unlocks structured composition use cases; depends on `artifact_content` (P1) being in place. `from_json`/`from_yaml` are useful in isolation too.

**Independent Test**: A template chaining `artifact_content | from_json` renders correctly and the output reflects values from parsed JSON fields.

**Acceptance Scenarios**:

1. **Given** a template using `storage_id | artifact_content | from_json`, **When** rendered with a storage_id pointing to valid JSON content, **Then** the template can access keys of the parsed object.
2. **Given** `storage_id | artifact_content | from_yaml`, **When** rendered with YAML content, **Then** the template can access keys of the parsed mapping.
3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict or appropriate empty value without raising.

⚠️ Potential issue | 🟡 Minor

US3 AC3 and FR-008 conflict on the `from_yaml` empty-string return value.

Line 61 (US3 AC3) allows "an empty dict or appropriate empty value", which could permit None. FR-008 (line 103) mandates "empty dict" unconditionally. In practice, yaml.safe_load("") returns None — not {} — because an empty YAML document is valid and maps to Python None. json.loads("") raises json.JSONDecodeError. Both filters therefore require explicit special-casing; the spec needs to align on exactly one contract ({} or None) so the test in SC-005 and the implementation agree.

📝 Suggested fix — align AC3 with FR-008

```diff
-3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict or appropriate empty value without raising.
+3. **Given** `from_json` or `from_yaml` applied to an empty string (for example, a template variable that is explicitly empty), **When** rendered, **Then** the filter returns an empty dict (`{}`) without raising. *(Note: both filters must explicitly handle the empty-string edge case, as `yaml.safe_load("")` returns `None` and `json.loads("")` raises an error by default.)*
```
🧰 Tools
🪛 LanguageTool

[style] ~61-~61: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...access keys of the parsed mapping. 3. Given from_json or from_yaml applied to...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dev/specs/infp-504-artifact-composition/spec.md` at line 61, US3 AC3
currently allows "an empty dict or appropriate empty value" for
from_json/from_yaml when given an empty string, which conflicts with FR-008 that
mandates an empty dict; update the spec and implementations to normalize
empty-string inputs to return an empty dict (not None) so tests in SC-005 align.
Concretely, change the AC3 wording to require "{}" for empty input and ensure
the from_yaml and from_json functions (identify by name: from_yaml, from_json)
special-case empty-string input and return {} instead of relying on
yaml.safe_load("") or json.loads("") behavior; add a short unit test asserting
that from_yaml("") and from_json("") yield {}.
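The special-casing this comment asks for can be sketched as follows. Filter names follow the spec; the `JinjaFilterError` wrapping FR-008 requires for malformed input is omitted, and PyYAML is assumed to be available.

```python
import json

import yaml  # PyYAML; assumed available since the SDK already handles YAML

def from_json(value: str):
    """Sketch of the proposed filter: parse JSON, special-casing empty input."""
    if not value:
        return {}  # json.loads("") would raise JSONDecodeError
    return json.loads(value)

def from_yaml(value: str):
    """Sketch of the proposed filter: parse YAML, special-casing empty input."""
    if not value:
        return {}  # yaml.safe_load("") would return None, not {}
    return yaml.safe_load(value)
```

Without the empty-string guard the two filters disagree (`None` versus an exception), which is exactly the contract mismatch the review flags.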


---

### User story 4 - security gate blocks filters in computed attributes context (Priority: P1)

The Infrahub API server executes computed attributes locally and must block `artifact_content` and `file_object_content` because no network calls should be made within that context. Prefect workers run inside Infrahub with a client and must be able to use these filters. Other currently-untrusted Jinja2 filters (for example, `safe`, `attr`) must remain subject to their existing restriction rules — this feature must not inadvertently widen their permissions.

The existing single `restricted: bool` parameter on `validate()` is insufficient: flipping it to `False` to permit Infrahub filters would also permit all other untrusted filters. The validation mechanism must be extended to express at least three distinct execution contexts.

**Why this priority**: Preventing these filters from running in the computed attributes context is a hard requirement. Shares P1 priority with User Story 1.

**Independent Test**: Validation in the computed-attributes context raises `JinjaTemplateOperationViolationError` for templates using `artifact_content` or `file_object_content`. Validation in the Prefect-worker context passes for the same templates. Neither context changes the restriction behaviour of other currently-untrusted filters.

**Acceptance Scenarios**:

1. **Given** a template referencing `artifact_content`, **When** validated in the computed-attributes context, **Then** `JinjaTemplateOperationViolationError` is raised.
2. **Given** the same template, **When** validated in the Prefect-worker context with a client-initialised `Jinja2Template`, **Then** validation passes.
3. **Given** a template using an existing untrusted filter (for example, `safe`), **When** validated in the Prefect-worker context, **Then** `JinjaTemplateOperationViolationError` is still raised — the Prefect-worker context does not unlock other untrusted filters.

---

### Edge cases

- What happens if a storage_id value is `None` (Python None) rather than a missing string? Both cases must raise a descriptive error.
- What if the object store raises a network or authentication error mid-render? All error conditions (null storage_id, not-found, auth failure, network failure) raise exceptions — there is no silent fallback.
- What if `from_json` or `from_yaml` already exists in the netutils filter set? De-duplicate rather than shadow.
- What happens when `from_json` or `from_yaml` receives malformed content (invalid JSON/YAML syntax)? `JinjaFilterError` is raised — no silent fallback.
- What if the same filter name is registered twice (for example, a user-supplied filter that shadows `artifact_content`)? Existing override behaviour should be preserved.
- File-based templates use a regular `Environment` (not sandboxed); the new filters must be injected correctly in both cases.

## Requirements *(mandatory)*

### Functional requirements

- **FR-001**: `Jinja2Template.__init__` MUST accept an optional `client` parameter of type `InfrahubClient | None` (default `None`). `InfrahubClientSync` is not supported.
- **FR-002**: A dedicated class (for example, `InfrahubFilters`) MUST be introduced to hold the client reference and expose the Infrahub-specific filter callable methods. `Jinja2Template` instantiates this class when a client is provided and registers its filters into the Jinja2 environment.
Contributor
Not clear to me why we need that

- **FR-003**: The system MUST provide an `artifact_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced artifact, using the artifact-specific API path.
- **FR-004**: The system MUST provide a `file_object_content` Jinja2 filter that accepts a `storage_id` string and returns the raw string content of the referenced file object, using the file-object-specific API path or metadata handling — this implementation is distinct from `artifact_content`.
Contributor

Is the storage id enough to pull the content of a file object?

Contributor

Also, there is no mention of permissions in this doc, but the file object API will check permissions before returning the content; it feels like this should be mentioned in the spec, even if we just bypass it for now.

Contributor

What if the returned object is in binary format and not text-based?

- **FR-005**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when the input `storage_id` is null or empty, or when the object store cannot retrieve the content for any reason (not found, network failure, auth failure).
- **FR-006**: Both `artifact_content` and `file_object_content` MUST raise `JinjaFilterError` when invoked and no `InfrahubClient` was supplied to `Jinja2Template` at construction time. The error message MUST name the filter and explain that an `InfrahubClient` is required.
- **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available.
Comment on lines +100 to +102

⚠️ Potential issue | 🟡 Minor

FR-007 is internally inconsistent with the three-context validation model — mark as provisional.

FR-007 registers both new filters as trusted=False, but this puts them in the same bucket as safe (and similar currently-untrusted filters). User Story 4, Acceptance Scenario 3 explicitly requires that other trusted=False filters still raise JinjaTemplateOperationViolationError in the Prefect-worker context. With the current binary validate(restricted: bool), there is no way to simultaneously:

  • block artifact_content in the computed-attributes context ✓
  • allow artifact_content in the Prefect-worker context ✓
  • continue blocking safe in the Prefect-worker context ✓

The Open Questions section (line 151) correctly identifies the enum migration needed, but FR-007 reads as a final requirement rather than a provisional one. An implementer reading FR-007 in isolation could register the new filters identically to safe and still satisfy the wording — which would break US4/SC-002.

📝 Suggested wording to make FR-007 provisional

```diff
-- **FR-007**: Both `artifact_content` and `file_object_content` MUST be registered with `trusted=False` in the `FilterDefinition` registry so that `validate(restricted=True)` blocks them in the computed attributes execution context (Infrahub API server). They are only permitted to execute on Prefect workers, where an `InfrahubClient` is available.
+- **FR-007**: Both `artifact_content` and `file_object_content` MUST be blocked in the computed attributes execution context (Infrahub API server) and permitted in the Prefect-worker execution context. *Provisional implementation*: register with `trusted=False` so that the existing `validate(restricted=True)` blocks them initially. **This requirement depends on resolution of the "Validation level model" open question**, which must introduce a dedicated worker-context tag to distinguish these filters from currently-untrusted filters (e.g., `safe`) that must remain blocked even in the Prefect-worker context.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dev/specs/001-artifact-composition/spec.md` around lines 100 - 102, FR-007 is
inconsistent with the three-context validation model: registering
artifact_content and file_object_content as trusted=False will equate them with
existing filters like safe and break the intended Prefect-worker allowance;
update the spec wording to mark FR-007 as provisional and add a note that
FilterDefinition must be extended (e.g., a new enum or flag) to distinguish
"worker-allowed" filters from general trusted/untrusted, and reference
artifact_content, file_object_content, FilterDefinition,
validate(restricted=True), safe, and JinjaTemplateOperationViolationError so
implementers know to avoid simply copying safe's trusted=False registration and
instead await the enum migration described in Open Questions.

Contributor

Instead of checking at runtime whether one of these filters is used in a computed attribute, could we do a sanity check when the user adds/updates a computed attribute instead?

- **FR-008**: The system MUST provide `from_json` and `from_yaml` Jinja2 filters (adding them only if not already present in the environment) that parse a string into a Python dict/list. Applying them to an empty string MUST return an empty dict without raising. Applying them to malformed content MUST raise `JinjaFilterError`.
- **FR-009**: `from_json` and `from_yaml` MUST be registered as trusted filters (`trusted=True`) since they perform no external I/O.
- **FR-010**: All new filters MUST work correctly with `InfrahubClient` (async). `InfrahubClientSync` is not a supported client type for `Jinja2Template`.
- **FR-011**: All `JinjaFilterError` instances MUST carry an actionable error message that identifies the filter name, the cause of failure, and any remediation hint (for example: "artifact_content requires an InfrahubClient — pass one via Jinja2Template(client=...)").
- **FR-012**: A new `JinjaFilterError` exception class MUST be added to `infrahub_sdk/template/exceptions.py` as a subclass of `JinjaTemplateError`.
- **FR-013**: Documentation MUST include a Python transform example demonstrating artifact content retrieval via `client.object_store.get(identifier=storage_id)`. No new SDK convenience method will be added.

### Key entities

- **`Jinja2Template`**: Gains an optional `client` constructor parameter; delegates client-bound filter registration to `InfrahubFilters`.
- **`InfrahubFilters`**: New class that holds an `InfrahubClient` reference and exposes `artifact_content`, `file_object_content`, and any other client-dependent filter methods. Registered into the Jinja2 filter map when a client is provided.
- **`FilterDefinition`**: Existing dataclass used to declare filter `name`, `trusted` flag, and `source`. New entries are added here for all new filters.
- **`ObjectStore` / `ObjectStoreSync`**: Existing async/sync storage clients used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls.

⚠️ Potential issue | 🟡 Minor

ObjectStoreSync is listed as a key entity despite InfrahubClientSync being explicitly unsupported.

FR-001 and FR-010 both rule out InfrahubClientSync, so InfrahubFilters will only ever hold an async InfrahubClient, which internally uses ObjectStore. Listing ObjectStore / ObjectStoreSync with the phrasing "used by InfrahubFilters" implies both are in scope and could mislead implementers into adding a sync code path.

📝 Suggested fix

```diff
-- **`ObjectStore` / `ObjectStoreSync`**: Existing async/sync storage clients used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls.
+- **`ObjectStore`**: Existing async storage client used by `InfrahubFilters` to perform `get(identifier=storage_id)` calls. (`ObjectStoreSync` is not used; `InfrahubClientSync` is explicitly out of scope — see FR-001, FR-010.)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dev/specs/infp-504-artifact-composition/spec.md` at line 115, Change the spec
to remove or reword the "ObjectStore / ObjectStoreSync" line so it doesn't imply
a sync path; specifically, delete the reference to ObjectStoreSync (or
explicitly state it is out of scope) and clarify that InfrahubFilters holds an
async InfrahubClient which uses ObjectStore (not ObjectStoreSync), referencing
InfrahubFilters, InfrahubClient, InfrahubClientSync, ObjectStore, and
ObjectStoreSync so readers see that InfrahubClientSync/ObjectStoreSync are
unsupported and only the async ObjectStore path should be implemented.

- **`JinjaFilterError`**: New exception class, subclass of `JinjaTemplateError`, raised by `InfrahubFilters` methods on all filter-level failures (no client, null/empty storage_id, retrieval error).

## Success criteria *(mandatory)*

### Measurable outcomes

- **SC-001**: A composite Jinja2 artifact template using `artifact_content` renders successfully end-to-end (integration test), with output containing all expected sub-artifact content.
- **SC-002**: `validate(restricted=True)` on any template referencing `artifact_content` or `file_object_content` always raises a security violation — zero false negatives across the test suite.
- **SC-003**: All filter error conditions (no client, null/empty storage_id, retrieval failure) produce a descriptive, actionable error message — no silent failures, no raw tracebacks as the primary user-facing message.
- **SC-004**: The async execution path (`InfrahubClient`) is covered by unit tests with no regressions to existing filter behaviour.
- **SC-005**: The full unit test suite (`uv run pytest tests/unit/`) passes without modification after the feature is added.
- **SC-006**: A template chaining `artifact_content | from_json` or `artifact_content | from_yaml` can access parsed fields from a structured artifact in a rendered output.

## Assumptions

- The `artifact_content` and `file_object_content` filters receive a `storage_id` string directly from the template variable context — extracted from the GraphQL query result by the template author. The filter does not resolve artifact names — it operates on storage IDs only.
- Ordering of artifact generation is a known limitation: artifacts may be generated in parallel. This is a documented constraint, not something this feature enforces. Future event-driven pipeline work (INFP-227) will address ordering.
- `from_json` and `from_yaml` are not currently present in the builtin or netutils filter sets; they will be added as part of this feature. If they already exist, the implementation de-duplicates rather than overrides.
- All failure modes from the filters (null storage_id, empty storage_id, object not found, network error, auth error) raise exceptions. There is no silent fallback to an empty string.
- The permitted execution context for `artifact_content` and `file_object_content` is Prefect workers only. The computed attributes path in the Infrahub API server always runs `validate(restricted=True)`, which blocks these filters before rendering begins.
- The `InfrahubFilters` class provides synchronous callables to Jinja2's filter map; the underlying client is always `InfrahubClient` (async). Async I/O calls are handled consistently with the SDK's existing pattern.
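The sync-bridge assumption above can be sketched as follows. All class bodies here are stand-ins, and `asyncio.run` is only one possible bridge; the SDK's actual async-to-sync pattern may differ (for example, when an event loop is already running on the worker).

```python
import asyncio

class FakeObjectStore:
    """Stand-in for the SDK's ObjectStore; real calls hit the Infrahub API."""
    async def get(self, identifier: str) -> str:
        return {"sid-1": "hostname r1"}[identifier]

class FakeClient:
    """Stand-in for the async InfrahubClient."""
    object_store = FakeObjectStore()

class InfrahubFilters:
    """Sketch of the proposed class: holds the client, exposes sync callables."""
    def __init__(self, client) -> None:
        self.client = client

    def artifact_content(self, storage_id: str) -> str:
        # Jinja2 invokes filters synchronously, so the async object-store
        # call must be driven to completion here (error handling omitted).
        return asyncio.run(self.client.object_store.get(identifier=storage_id))
```

Registering `InfrahubFilters(client).artifact_content` into the environment's filter map then gives templates a plain synchronous filter backed by the async client.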

## Dependencies & constraints

- Depends on `ObjectStore.get(identifier)` in `infrahub_sdk/object_store.py`.
- Depends on the existing `FilterDefinition` dataclass and `trusted` flag mechanism in `infrahub_sdk/template/filters.py`.
- Depends on the existing `validate(restricted=True)` security mechanism in `Jinja2Template`.
- Must not break any existing filter behaviour or the `validate()` contract.
- No new external Python dependencies may be introduced without approval.
- Related: INFP-304 (Artifact of Artifacts), INFP-496 (Modular GraphQL queries), INFP-227 (Modular generators / event-driven pipeline).

## Open questions

- **Filter naming**: `artifact_content` is the working name. Alternatives are open.
- **Sandboxed environment injection**: The `render_jinja2_template` method in `integrator.py` has access to `self.sdk`; the exact threading path to pass the client into `Jinja2Template` needs investigation during planning.
- **Validation level model**: The current `validate(restricted: bool)` parameter is too coarse to express the three distinct execution contexts this feature requires. A natural evolution would be to replace the boolean with an enum (for example: `core` for the Infrahub API server, `worker` for Prefect background workers, `untrusted` for fully restricted local execution). Filters tagged as `worker`-only would be blocked in the `core` context but permitted in the `worker` context, while `trusted` filters remain available in all contexts. The exact enum design and the migration of existing call sites are technical decisions for the implementation plan, but the interface change should be considered up front to avoid revisiting `validate()` again later.
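One possible shape for that enum, using the context names from the question above; the per-filter tagging table and `is_permitted` helper are illustrative assumptions, not a proposed API:

```python
from enum import Enum


class ValidationContext(Enum):
    CORE = "core"            # Infrahub API server (computed attributes)
    WORKER = "worker"        # Prefect background workers with an InfrahubClient
    UNTRUSTED = "untrusted"  # fully restricted local execution


ALL_CONTEXTS = {ValidationContext.CORE, ValidationContext.WORKER, ValidationContext.UNTRUSTED}

# Hypothetical per-filter tagging: each filter declares the contexts in
# which it may run. Trusted builtins run everywhere; the new I/O filters
# run only on workers.
FILTER_CONTEXTS = {
    "upper": ALL_CONTEXTS,
    "artifact_content": {ValidationContext.WORKER},
    "file_object_content": {ValidationContext.WORKER},
}


def is_permitted(filter_name: str, context: ValidationContext) -> bool:
    """Check whether a filter may be used in the given execution context."""
    return context in FILTER_CONTEXTS.get(filter_name, set())


print(is_permitted("artifact_content", ValidationContext.WORKER))  # True
print(is_permitted("artifact_content", ValidationContext.CORE))    # False
```

A `validate(context=ValidationContext.CORE)` call would then reject any template referencing a filter whose tag set excludes the caller's context, generalising today's boolean `restricted` flag.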

## Clarifications

### Session 2026-02-18

- Q: Are `artifact_content` and `file_object_content` identical at the storage API level, or do they use different API paths / metadata handling? → A: Different implementations — `file_object_content` uses a different API path or carries different metadata handling than `artifact_content`.
- Q: Where are these filters permitted to execute, and what mechanism enforces the boundary? → A: Blocked in computed attributes (executed locally in the Infrahub API server, which uses `validate(restricted=True)`); permitted on Prefect workers, which have access to an `InfrahubClient`. The `trusted=False` registration enforces this boundary via the existing restricted-mode validation.
- Q: What exception class should filter-level errors (no client, retrieval failure) raise? → A: A new `JinjaFilterError` class that is a child of the existing `JinjaTemplateError` base class.
- Q: Should the SDK expose a convenience method for artifact content retrieval in Python transforms? → A: No new method — document `client.object_store.get(identifier=storage_id)` directly.
- Q: What should `from_json`/`from_yaml` do on malformed input? → A: Raise `JinjaFilterError` on malformed JSON or YAML input.
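The error-handling decisions above can be sketched together. The exception names follow the answers in this session; the stand-in class bodies and the wrapping code are assumptions, since the real `JinjaTemplateError` lives in the SDK's existing exceptions module.

```python
import json


class JinjaTemplateError(Exception):
    """Stand-in for the SDK's existing template error base class."""


class JinjaFilterError(JinjaTemplateError):
    """New filter-level error, a child of JinjaTemplateError per the decision above."""


def from_json(value: str):
    """Parse a JSON string; raise JinjaFilterError on malformed input."""
    try:
        return json.loads(value)
    except json.JSONDecodeError as exc:
        raise JinjaFilterError(f"from_json: malformed JSON input: {exc}") from exc


print(from_json('{"a": 1}'))  # {'a': 1}
```

A `from_yaml` counterpart would follow the same pattern, catching the YAML parser's error and re-raising it as `JinjaFilterError` so template authors see one consistent failure type.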