Add the ability to export the schema in yaml#835

Open
BeArchiTek wants to merge 9 commits into stable from bkr-add-schema-exporter

Conversation

@BeArchiTek (Contributor) commented Feb 18, 2026

Fixes #151

Summary by CodeRabbit

  • New Features

    • Added infrahubctl schema export command to export Infrahub schemas as YAML files organized by namespace with configurable output directory and optional filtering by branch and namespace.
  • Documentation

    • Added comprehensive documentation for the new schema export command including usage examples and all available options (directory, branch, namespace, debug flags, and config file support).

BeArchiTek and others added 4 commits February 18, 2026 15:27
- Strip relationship fields that match schema loading defaults
  (direction: bidirectional, on_delete: no-action, cardinality: many,
  optional: true, min_count: 0, max_count: 0)
- Exclude branch from attribute/relationship dumps (inherited from node)
- Ensure scalar fields appear before attributes/relationships lists

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
read_only was unconditionally excluded from attribute dumps; move it to
_ATTR_EXPORT_DEFAULTS so it is stripped only when False (the default)
and kept when True (computed/read-only attributes).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…export

- Drop relationships with kind Group, Profile, or Hierarchy from export
  output — these are always auto-generated by Infrahub and must not be
  re-loaded manually (doing so caused validator_not_available errors for
  hierarchical generics)
- For GenericSchemaAPI objects that had Hierarchy relationships, restore
  the `hierarchical: true` flag so the schema round-trips cleanly
- Add read_only: false to _REL_EXPORT_DEFAULTS to stop it leaking into
  relationship output

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ints

- generics key now appears before nodes in the exported YAML payload
- uniqueness_constraints entries that are auto-generated from unique: true
  attributes (single-field ["<attr>__value"] entries) are removed on
  export; user-defined multi-field constraints are preserved

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
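The uniqueness-constraint pruning described in this commit could look roughly like the sketch below. The function name and signature are illustrative; the actual helper in the PR may differ.

```python
def prune_auto_constraints(
    constraints: list[list[str]], unique_attrs: set[str]
) -> list[list[str]]:
    """Drop single-field ["<attr>__value"] constraints generated from unique: true attributes.

    User-defined multi-field constraints are preserved unchanged.
    """
    suffix = "__value"
    kept = []
    for entry in constraints:
        if (
            len(entry) == 1
            and entry[0].endswith(suffix)
            and entry[0][: -len(suffix)] in unique_attrs
        ):
            continue  # auto-generated: Infrahub recreates it on schema load
        kept.append(entry)
    return kept
```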
@BeArchiTek BeArchiTek requested a review from a team as a code owner February 18, 2026 15:55
cloudflare-workers-and-pages bot commented Feb 18, 2026

Deploying infrahub-sdk-python with Cloudflare Pages

Latest commit: 56e0618
Status: ✅  Deploy successful!
Preview URL: https://a39510c7.infrahub-sdk-python.pages.dev
Branch Preview URL: https://bkr-add-schema-exporter.infrahub-sdk-python.pages.dev



codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 64.47368% with 27 lines in your changes missing coverage. Please review.

Files with missing lines        Patch %   Lines
infrahub_sdk/ctl/schema.py      20.83%    19 Missing ⚠️
infrahub_sdk/schema/export.py   69.23%    4 Missing and 4 partials ⚠️
@@            Coverage Diff             @@
##           stable     #835      +/-   ##
==========================================
- Coverage   80.38%   80.21%   -0.18%     
==========================================
  Files         115      116       +1     
  Lines        9926     9951      +25     
  Branches     1515     1517       +2     
==========================================
+ Hits         7979     7982       +3     
- Misses       1423     1444      +21     
- Partials      524      525       +1     
Flag                  Coverage Δ
integration-tests     41.14% <6.57%> (-0.15%) ⬇️
python-3.10           51.34% <44.73%> (-0.25%) ⬇️
python-3.11           51.34% <44.73%> (-0.22%) ⬇️
python-3.12           51.34% <44.73%> (-0.25%) ⬇️
python-3.13           51.32% <44.73%> (-0.25%) ⬇️
python-3.14           52.97% <47.36%> (-0.23%) ⬇️
python-filler-3.12    24.03% <19.73%> (+0.08%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines          Coverage Δ
infrahub_sdk/schema/__init__.py   71.46% <100.00%> (+2.12%) ⬆️
infrahub_sdk/schema/export.py     69.23% <69.23%> (ø)
infrahub_sdk/ctl/schema.py        50.31% <20.83%> (-14.40%) ⬇️

Move export helpers from ctl/schema.py to infrahub_sdk/schema/export.py
so the conversion logic is reusable outside the CLI. Regenerate
infrahubctl docs to include the new export subcommand.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions github-actions bot added the type/documentation Improvements or additions to documentation label Feb 19, 2026
"Schema",
"Profile",
"Template",
]
Contributor:

Instead of doing it like this, I'd suggest having this part included within the schema component of the SDK. The backend will return this information (to indicate whether the namespace is editable by the user). It could be that the SDK doesn't yet deal with that part of the API, but we should not introduce a new list for this and support them separately.

Contributor (author):

Can we do it as part of another PR later on?

@opsmill opsmill deleted a comment from coderabbitai bot Feb 19, 2026

coderabbitai bot commented Feb 19, 2026

Walkthrough

This change introduces a new schema export capability for Infrahub. A new CLI command is added that exports schemas in YAML format, supporting optional filtering by namespace and branch. The implementation includes logic to exclude internal fields, restricted namespaces (Core, Builtin), and auto-generated constraints to keep exports clean and reimportable. The schema data is serialized per namespace into separate YAML files with a UTC timestamp-based output directory. Both asynchronous and synchronous client support is provided, along with comprehensive unit tests covering the export behavior and API integration.

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ — The PR description only contains 'Fixes #151' without any substantive details about the changes, implementation approach, or testing methodology. Resolution: expand the description to explain the changes made and key implementation details, and provide testing instructions as outlined in the template.
  • Docstring Coverage ⚠️ — Docstring coverage is 47.83%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

  • Title check — The PR title 'Add the ability to export the schema in yaml' clearly and concisely describes the main feature being added, aligning with the changeset across CLI, SDK, and documentation.
  • Linked Issues check — The PR implementation fully addresses issue #151 requirements: adds the infrahubctl schema export command with namespace filtering, cleans up exported schemas by excluding auto-generated fields, and exports to YAML format enabling reimport.
  • Out of Scope Changes check — All changes are directly related to implementing the schema export feature specified in issue #151; no extraneous modifications detected.


coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
infrahub_sdk/constants.py (1)

6-19: Consider using frozenset for RESTRICTED_NAMESPACES.

As a constant used exclusively for membership tests (in), frozenset gives O(1) lookup and prevents accidental mutation.

♻️ Proposed change
-RESTRICTED_NAMESPACES: list[str] = [
-    "Account",
-    "Branch",
-    "Builtin",
-    "Core",
-    "Deprecated",
-    "Diff",
-    "Infrahub",
-    "Internal",
-    "Lineage",
-    "Schema",
-    "Profile",
-    "Template",
-]
+RESTRICTED_NAMESPACES: frozenset[str] = frozenset({
+    "Account",
+    "Branch",
+    "Builtin",
+    "Core",
+    "Deprecated",
+    "Diff",
+    "Infrahub",
+    "Internal",
+    "Lineage",
+    "Schema",
+    "Profile",
+    "Template",
+})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/constants.py` around lines 6 - 19, RESTRICTED_NAMESPACES is
currently a mutable list but used only for membership checks; change its type to
an immutable frozenset to prevent accidental mutation and get O(1) lookups by
replacing the list literal with a frozenset literal (e.g., frozenset({...}))
where RESTRICTED_NAMESPACES is defined in infrahub_sdk/constants.py and ensure
any code that treats it as a sequence (indexing/ordering) is not relying on list
behavior.
infrahub_sdk/schema/__init__.py (1)

533-553: export() always fetches the full schema even when namespaces is specified.

The client-side namespace filtering in _build_export_schemas is applied after fetching all schemas from the server. For large deployments, passing the namespace filter to _fetch (already supported via the namespaces parameter of InfrahubSchema.fetch) would reduce payload size. Not a bug, but worth considering for performance at scale.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/schema/__init__.py` around lines 533 - 553, The export() method
currently fetches the full schema then filters client-side; change it to pass
the provided namespaces to the server fetch to reduce payload. Update the call
in export (function export) from awaiting self.fetch(branch=branch) to await
self.fetch(branch=branch, namespaces=namespaces) so server-side filtering is
used before calling _build_export_schemas(schema_nodes=...,
namespaces=namespaces); ensure the namespaces parameter shape (None or
list[str]) matches InfrahubSchema.fetch's namespaces arg.
tests/unit/sdk/test_schema_export.py (1)

19-97: Duplicated test data builders across both test files.

_BASE_NODE, _BASE_GENERIC, _schema_response, and the _make_* helpers are defined in near-identical form in both tests/unit/sdk/test_schema_export.py and tests/unit/ctl/test_schema_export.py. Extracting these into a shared tests/helpers/schema.py module would eliminate the duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/sdk/test_schema_export.py` around lines 19 - 97, Duplicate test
data and builders (_BASE_NODE, _BASE_GENERIC, _schema_response and helpers
_make_node_schema, _make_generic_schema, _make_profile_schema,
_make_template_schema) should be extracted into a single shared helper module
and both test files should import them; create a new helpers module that exports
those constants and factory functions, replace the duplicated definitions in
tests/unit/sdk/test_schema_export.py and tests/unit/ctl/test_schema_export.py
with imports from the new module, and update any references to use the imported
symbols so tests still construct NodeSchemaAPI, GenericSchemaAPI,
ProfileSchemaAPI, and TemplateSchemaAPI using the shared builders.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@infrahub_sdk/ctl/schema.py`:
- Around line 217-219: The default export directory function
_default_export_directory currently returns a str while the export command's
directory parameter is annotated as Path; update _default_export_directory to
return a Path (e.g., wrap the formatted string with Path(...)) so the default
value matches the annotated type for export, and/or add a unit/integration test
that calls the export CLI without --directory to assert the resulting type is
Path and no AttributeError occurs (look for references to
_default_export_directory and the export command / directory parameter to modify
code and add tests).


@BeArchiTek BeArchiTek self-assigned this Feb 20, 2026
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
coderabbitai bot left a comment

🧹 Nitpick comments (1)
infrahub_sdk/ctl/schema.py (1)

247-247: Extract "1.0" to a named constant.

The schema file format version is hardcoded inline. If it ever needs to change (or if other parts of the codebase need to reference the same version), hunting for the magic string will be error-prone. A module-level constant (e.g., in infrahub_sdk/constants.py, which already defines RESTRICTED_NAMESPACES) makes the contract explicit.

♻️ Proposed refactor

In infrahub_sdk/constants.py (or a suitable constants module):

+ SCHEMA_FILE_VERSION = "1.0"

Then in infrahub_sdk/ctl/schema.py:

- from ..async_typer import AsyncTyper
+ from ..async_typer import AsyncTyper
+ from ..constants import SCHEMA_FILE_VERSION   # add alongside existing imports
-        payload: dict[str, Any] = {"version": "1.0"}
+        payload: dict[str, Any] = {"version": SCHEMA_FILE_VERSION}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/ctl/schema.py` at line 247, The schema file version is hardcoded
as "1.0" in the payload definition; extract it to a module-level constant (e.g.,
SCHEMA_FILE_VERSION) and reference that constant instead of the string literal.
Add SCHEMA_FILE_VERSION = "1.0" to the existing constants module
(infrahub_sdk/constants.py alongside RESTRICTED_NAMESPACES), then update the
payload assignment in infrahub_sdk/ctl/schema.py (where payload: dict[str, Any]
= {"version": "1.0"}) to use the imported SCHEMA_FILE_VERSION constant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@infrahub_sdk/ctl/schema.py`:
- Around line 217-219: The _default_export_directory function is correct: ensure
it returns a pathlib.Path by keeping the return type Path and the f-string
naming (function _default_export_directory) that constructs
Path(f"infrahub-schema-export-{timestamp}"); no changes required—leave the
timestamp generation (datetime.now(timezone.utc).astimezone().strftime(...)) and
the Path(...) return as-is.


Resolve conflicts:
- Keep export command in ctl/schema.py (removed on stable, added on branch)
- Accept deletion of tests/unit/ctl/test_schema_export.py (tests moved to tests/unit/sdk/)
- Move RESTRICTED_NAMESPACES into schema/export.py (removed from constants.py on stable)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
coderabbitai bot left a comment

🧹 Nitpick comments (3)
infrahub_sdk/schema/__init__.py (3)

532-552: export() fetches the full schema even when namespaces is provided; namespaces docstring is incomplete.

Two related concerns:

  1. Performance: self.fetch(branch=branch) is called without forwarding namespaces, so the full schema is always retrieved from the server and the namespace filtering is done entirely in Python. For large schemas this unnecessarily transfers and parses all data when only a subset is needed. Passing a pre-filtered namespace list to fetch() would leverage server-side filtering.

  2. Docstring: The namespaces parameter description is silent about restricted namespaces (Core, Builtin) always being excluded from the result, even when explicitly listed.

♻️ Suggested optimisation for `InfrahubSchema.export`
     async def export(
         self,
         branch: str | None = None,
         namespaces: list[str] | None = None,
     ) -> dict[str, dict[str, list[dict[str, Any]]]]:
         """Export user-defined schemas organized by namespace.
 
         Fetches all schemas from the server, filters out system types and
         restricted namespaces, and returns a dict keyed by namespace with
         ``"nodes"`` and ``"generics"`` lists of export-ready dicts.
 
         Args:
             branch: Branch to export from. Defaults to default_branch.
-            namespaces: Optional list of namespaces to include. If empty/None, all user-defined namespaces are exported.
+            namespaces: Optional list of namespaces to include. If empty/None,
+                all user-defined namespaces are exported. Restricted namespaces
+                (e.g. ``Core``, ``Builtin``) are always excluded even when
+                explicitly listed.
 
         Returns:
             Mapping of namespace to ``{"nodes": [...], "generics": [...]}``.
         """
         branch = branch or self.client.default_branch
-        schema_nodes = await self.fetch(branch=branch)
+        # Pre-filter to avoid fetching namespaces that will be excluded anyway.
+        fetch_namespaces = (
+            [ns for ns in namespaces if ns not in RESTRICTED_NAMESPACES]
+            if namespaces
+            else None
+        )
+        schema_nodes = await self.fetch(branch=branch, namespaces=fetch_namespaces)
         return self._build_export_schemas(schema_nodes=schema_nodes, namespaces=namespaces)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/schema/__init__.py` around lines 532 - 552, The export()
implementation currently always calls self.fetch(branch=branch) which retrieves
the full schema; change it to forward the namespaces argument to the server by
calling self.fetch(branch=branch, namespaces=namespaces) so server-side
filtering is used, then pass the returned schema_nodes into
self._build_export_schemas as before; also update the export() docstring's
namespaces description to explicitly state that Core and Builtin (restricted)
namespaces are always excluded from the result even if listed.

796-816: Same performance and docstring gap as InfrahubSchema.export above.

self.fetch(branch=branch) is called without forwarding namespaces, and the docstring omits the restricted-namespace exclusion caveat — apply the same fix as suggested for the async variant.

♻️ Suggested fix for `InfrahubSchemaSync.export`
         branch = branch or self.client.default_branch
-        schema_nodes = self.fetch(branch=branch)
+        fetch_namespaces = (
+            [ns for ns in namespaces if ns not in RESTRICTED_NAMESPACES]
+            if namespaces
+            else None
+        )
+        schema_nodes = self.fetch(branch=branch, namespaces=fetch_namespaces)
         return self._build_export_schemas(schema_nodes=schema_nodes, namespaces=namespaces)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/schema/__init__.py` around lines 796 - 816, The export method
currently calls self.fetch(branch=branch) without forwarding the namespaces
filter and its docstring doesn't mention that restricted namespaces are
excluded; update the implementation to call self.fetch(branch=branch,
namespaces=namespaces) so the namespaces parameter is honored, and update the
export docstring for InfrahubSchemaSync.export to explicitly state that
restricted/system namespaces are excluded and that the optional namespaces
argument will limit which user namespaces are exported.

123-151: _build_export_schemas silently drops explicitly requested restricted namespaces.

When a caller passes e.g. namespaces=["Core"], the method returns {} with no indication that Core was excluded because it is restricted. The export() docstring for the namespaces parameter doesn't document this behavior either. Consider adding a note to both docstrings, or emitting a warning when a caller-supplied namespace overlaps with RESTRICTED_NAMESPACES.

💡 Suggested docstring addition for `_build_export_schemas`
     """Organize fetched schemas into a per-namespace export structure.

     Filters out system types (Profile/Template), restricted namespaces,
     and optionally limits to specific namespaces.

+    Note:
+        Namespaces listed in ``RESTRICTED_NAMESPACES`` (e.g. ``Core``,
+        ``Builtin``) are always excluded, even when they appear in
+        ``namespaces``.
+
     Returns:
         Mapping of namespace to ``{"nodes": [...], "generics": [...]}``.
     """
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@infrahub_sdk/schema/__init__.py` around lines 123 - 151, The
_build_export_schemas function currently drops caller-supplied restricted
namespaces silently; update _build_export_schemas to detect intersection =
set(namespaces) & RESTRICTED_NAMESPACES and emit a clear warning (e.g.,
warnings.warn) listing the restricted namespace(s) being ignored before
proceeding, and update the export() docstring and _build_export_schemas
docstring to note that RESTRICTED_NAMESPACES are always excluded and that a
warning will be emitted when the caller requests them; reference the
_build_export_schemas function, RESTRICTED_NAMESPACES constant, and the export()
docstring so reviewers can find the changes.
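The warning-based approach suggested in this comment can be sketched as below. Names are abbreviated for illustration: the real RESTRICTED_NAMESPACES constant in the SDK contains more entries, and the helper function is hypothetical.

```python
import warnings

# Abbreviated stand-in for the SDK's RESTRICTED_NAMESPACES constant.
RESTRICTED_NAMESPACES = frozenset({"Core", "Builtin", "Internal"})

def filter_requested_namespaces(requested: list[str]) -> list[str]:
    """Warn about restricted namespaces instead of silently dropping them."""
    blocked = sorted(set(requested) & RESTRICTED_NAMESPACES)
    if blocked:
        warnings.warn(f"Ignoring restricted namespace(s): {', '.join(blocked)}")
    return [ns for ns in requested if ns not in RESTRICTED_NAMESPACES]
```

With this shape, a caller passing namespaces=["Core"] still gets an empty result, but the warning makes the exclusion visible rather than silent.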

@ogenstad (Contributor) left a comment

I've added some comments to this. If it's required for a project, we can go forward with this. However, the format change I mention for the .export() method would be considered a breaking change when we make it.

To allow for easier refactoring, it would be good to have an integration test where we load a schema into Infrahub, export it again, and then compare the schema we just loaded with what we got out of Infrahub using infrahubctl.


if TYPE_CHECKING:
from pytest_httpx import HTTPXMock
from infrahub_sdk.schema import (
Contributor:

This import should be above the "if TYPE_CHECKING" section. I would have expected the linters to pick up on that.

)
httpx_mock.add_response(method="GET", url="http://mock/api/schema?branch=main", json=response)

if client_type == "async":
Contributor:

This doesn't follow the pattern of existing tests when using the default async client. I'd reuse the same approach or perhaps use the clients fixture that return both of them.

self,
branch: str | None = None,
namespaces: list[str] | None = None,
) -> dict[str, dict[str, list[dict[str, Any]]]]:
Contributor:

The return type dict[str, dict[str, list[dict[str, Any]]]] makes it very hard for users to know what to expect from this method. I think this can be fine for an initial version, but moving forward I think we want an actual object returned that matches what a schema looks like. That object could then have a .to_dict() method to handle converting the schema into a dictionary, which would do a lot of the work that schema_to_export_dict does today.
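The typed-object shape this reviewer suggests might be sketched as follows. All names here are hypothetical, illustrating the idea rather than the PR's actual code.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class NamespaceExport:
    """Illustrative typed container for one namespace's exported schema."""
    namespace: str
    nodes: list[dict[str, Any]] = field(default_factory=list)
    generics: list[dict[str, Any]] = field(default_factory=list)

    def to_dict(self) -> dict[str, list[dict[str, Any]]]:
        # Converts the typed object back into the dict shape used today,
        # isolating serialization concerns from the export API itself.
        return {"nodes": self.nodes, "generics": self.generics}
```

Returning such an object lets the method signature document itself, while .to_dict() preserves backwards-compatible YAML serialization.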


Labels

type/documentation Improvements or additions to documentation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

feature: Ability to export the schema in YAML format using infrahubctl

2 participants