docs (k8s): Move inline YAML examples to feature files for schema and topic controllers #1650
david-yu wants to merge 3 commits into redpanda-data:main
Conversation
…ollers

Extract inline examples from k-schema-controller.adoc (full compatibility schema, schema references) and k-manage-topics.adoc (write caching topic, cleanup policy topic) into their respective feature files with proper tags, so they follow the same include pattern as existing examples.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

The changes refactor Kubernetes example documentation by extracting inline YAML examples into reusable test scenarios within Gherkin feature files. Two new schema CRD test scenarios are added, one demonstrating Avro schema compatibility levels and another demonstrating schema references, while two topic CRD scenarios are added for write caching and cleanup policy configurations. The documentation pages are updated to reference these feature file snippets via AsciiDoc includes rather than maintaining duplicate inline YAML blocks. A minor text reorganization in the schema registry ACLs documentation improves section flow.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
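For readers unfamiliar with the pattern the walkthrough describes: a snippet in a feature file is bracketed with tag comments, and the AsciiDoc page includes only that tagged region. A minimal sketch of the include side (the tag name `some-example` is a placeholder; the real tags in this PR are `schema-references-manifest` and `cleanup-policy-topic-example`):

```asciidoc
[,yaml,indent=0]
----
include::manage:example$kubernetes/schema-crds.feature[tags=some-example,indent=0]
----
```

In the feature file itself, the region is delimited with `# tag::some-example[]` and `# end::some-example[]` comments, which Asciidoctor strips from the rendered include, so the same YAML serves as both a tested scenario and a docs example.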
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
modules/manage/pages/kubernetes/k-schema-controller.adoc (1)
188-193: Consider adding documentation about the prerequisite schema.

The included example references a product schema that must exist before applying this manifest:

references:
- name: product-schema
  subject: product
  version: 1

Consider adding a note before this example explaining that the referenced schema must be created first, similar to how other documentation sections explain prerequisites.
Suggested addition:

 Define a schema reference using the `references` field. The reference includes the name, subject, and version of the referenced schema:

+NOTE: The referenced schema (in this case, a schema with subject `product`) must already exist in the Schema Registry before applying a schema that references it.
+
 [,yaml,indent=0]
 ----
 include::manage:example$kubernetes/schema-crds.feature[tags=schema-references-manifest,indent=0]
 ----

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/manage/pages/kubernetes/k-schema-controller.adoc` around lines 188-193, add a short prerequisite note before the YAML example that uses the references field, explaining that the referenced schema must exist beforehand; specifically mention that the example's reference (name: product-schema, subject: product, version: 1) requires the corresponding product schema to be created prior to applying this manifest, so users know to create that schema first.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modules/manage/examples/kubernetes/schema-crds.feature`:
- Around line 142-175: The scenario for Schema named "order-schema" references a
non-existent dependency "product-schema" (subject: "product", version: 1),
causing sync/compatibility to fail; fix by either adding a preceding scenario
that creates the referenced schema (e.g., create a Schema resource named
"product-schema" with subject "product" and version 1 and ensure it's synced
before the order-schema scenario) or change the references block in the
"order-schema" manifest to point to an existing schema in this feature (for
example replace name/subject with the existing "product-catalog" schema and its
correct subject/version) so the reference can be resolved at test time.
In `@modules/manage/examples/kubernetes/topic-crds.feature`:
- Around line 55-79: The topic name "compacted-topic" conflicts with its
configured cleanup.policy of "delete"; fix this by either renaming the Topic
metadata.name from "compacted-topic" to a name matching the delete semantics
(e.g., "retained-topic" or "delete-policy-topic") or change
spec.additionalConfig.cleanup.policy from "delete" to "compact" so the name and
policy align; update the Scenario text and any referenced steps that mention
"compacted-topic" to the new name to keep the test consistent.
---
Nitpick comments:
In `@modules/manage/pages/kubernetes/k-schema-controller.adoc`:
- Around line 188-193: Add a short prerequisite note before the YAML example
that uses the references field to explain the referenced schema must exist
beforehand; specifically mention that the example's reference (name:
product-schema, subject: product, version: 1) requires the corresponding product
schema to be created prior to applying this manifest so users know to create
that schema first.
📒 Files selected for processing (5)
modules/manage/examples/kubernetes/schema-crds.feature
modules/manage/examples/kubernetes/topic-crds.feature
modules/manage/pages/kubernetes/k-manage-topics.adoc
modules/manage/pages/kubernetes/k-schema-controller.adoc
modules/manage/pages/kubernetes/security/authentication/k-schema-registry-acls.adoc
@skip:gke @skip:aks @skip:eks
Scenario: Manage order schema with references (Avro)
  Given there is no schema "order-schema" in cluster "basic"
  When I apply Kubernetes manifest:
    """
    # tag::schema-references-manifest[]
    # This manifest creates an Avro schema named "order-schema" that references another schema.
    # Schema references enable modular and reusable schema components for complex data structures.
    ---
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Schema
    metadata:
      name: order-schema
      namespace: redpanda
    spec:
      cluster:
        clusterRef:
          name: basic
      references:
        - name: product-schema
          subject: product
          version: 1
      text: |
        {
          "type": "record",
          "name": "Order",
          "fields": [
            { "name": "product", "type": "Product" }
          ]
        }
    # end::schema-references-manifest[]
    """
  And schema "order-schema" is successfully synced
  Then I should be able to check compatibility against "order-schema" in cluster "basic"
❓ Verification inconclusive
Script executed:
#!/bin/bash
# Search for any existing schema definition with subject "product" in the feature files
rg -n "subject.*product" --type-add 'feature:*.feature' -t feature
# Check if there's a test setup that creates the product schema
rg -n "product-schema|subject: product" modules/manage/examples/kubernetes/

Repository: redpanda-data/docs
Exit code: 0
stdout:
modules/manage/examples/kubernetes/schema-crds.feature:162: subject: product
modules/manage/examples/kubernetes/schema-crds.feature:161: - name: product-schema
modules/manage/examples/kubernetes/schema-crds.feature:162: subject: product
Script executed:
# Find the exact subject name of the product-catalog schema
rg -A 20 "product-catalog" modules/manage/examples/kubernetes/schema-crds.feature | head -40
# Search the entire repository for any schema with subject "product"
rg -n "subject.*product" --type-add 'feature:*.feature' -t feature
# Check if there's any test setup or Before hooks
rg -n "Background:|Before" modules/manage/examples/kubernetes/schema-crds.feature
# Look for other feature files that might set up schemas
fd -name "*.feature" modules/manage/examples/kubernetes/

Repository: redpanda-data/docs
Exit code: 2
stdout:
Given there is no schema "product-catalog" in cluster "basic"
When I apply Kubernetes manifest:
"""
# tag::product-catalog-protobuf-schema-manifest[]
# This manifest creates a Protobuf schema named "product-catalog" in the "basic" cluster.
# The schema defines a message "Product" with fields for product ID, name, price, and category.
---
apiVersion: cluster.redpanda.com/v1alpha2
kind: Schema
metadata:
name: product-catalog
spec:
cluster:
clusterRef:
name: basic
schemaType: protobuf
compatibilityLevel: Backward
text: |
syntax = "proto3";
message Product {
int32 product_id = 1;
string product_name = 2;
double price = 3;
string category = 4;
}
# end::product-catalog-protobuf-schema-manifest[]
"""
And schema "product-catalog" is successfully synced
Then I should be able to check compatibility against "product-catalog" in cluster "basic"
@skip:gke @skip:aks @skip:eks
Scenario: Manage order event schema (JSON)
Given there is no schema "order-event" in cluster "basic"
When I apply Kubernetes manifest:
"""
# tag::order-event-json-schema-manifest[]
# This manifest creates a JSON schema named "order-event" in the "basic" cluster.
# The schema requires an "order_id" (string) and a "total" (number) field, with no additional properties allowed.
---
modules/manage/examples/kubernetes/schema-crds.feature:162: subject: product
3: Background: Cluster available
stderr:
error: Found argument '-n' which wasn't expected, or isn't valid in this context
If you tried to supply '-n' as a value rather than a flag, use '-- -n'
Usage: fd [OPTIONS] [pattern] [path]...
For more information try '--help'
Script executed:
# Read the full content of the feature file to understand structure and dependencies
cat -n modules/manage/examples/kubernetes/schema-crds.feature | grep -A 5 -B 5 "product-schema\|subject: product"
# Check if there's any documentation on how subjects are assigned
rg -i "subject.*derived\|schema.*subject\|metadata.*name" modules/manage/examples/kubernetes/ -A 2
# List all schemas defined in this feature file to see the pattern
rg "kind: Schema" -A 15 modules/manage/examples/kubernetes/schema-crds.feature | grep -E "name:|subject:|kind: Schema"

Repository: redpanda-data/docs
Exit code: 0
stdout:
156 spec:
157 cluster:
158 clusterRef:
159 name: basic
160 references:
161 - name: product-schema
162 subject: product
163 version: 1
164 text: |
165 {
166 "type": "record",
167 "name": "Order",
kind: Schema
name: customer-profile
name: basic
kind: Schema
name: product-catalog
name: basic
kind: Schema
name: order-event
name: basic
kind: Schema
name: fully-compatible-schema
name: basic
kind: Schema
name: order-schema
name: basic
- name: product-schema
subject: product
Script executed:
# Check the Background section and overall structure of the feature file
head -20 modules/manage/examples/kubernetes/schema-crds.feature
# Search the entire repository for "product-schema" to see if it's created elsewhere
rg -i "product-schema" --type-add 'feature:*.feature' -t feature -B 3 -A 3
# Check for any reference to schema references in documentation or comments
rg -i "schema.*reference|reference.*schema" modules/manage/examples/kubernetes/ -B 2 -A 2

Repository: redpanda-data/docs
Exit code: 0
stdout:
@cluster:basic @variant:vectorized
Feature: Schema CRDs
Background: Cluster available
Given cluster "basic" is available
@skip:gke @skip:aks @skip:eks
Scenario: Manage customer profile schema (Avro)
Given there is no schema "customer-profile" in cluster "basic"
When I apply Kubernetes manifest:
"""
# tag::customer-profile-avro-schema-manifest[]
# This manifest creates an Avro schema named "customer-profile" in the "basic" cluster.
# The schema defines a record with fields for customer ID, name, and age.
---
apiVersion: cluster.redpanda.com/v1alpha2
kind: Schema
metadata:
name: customer-profile
spec:
cluster:
modules/manage/examples/kubernetes/schema-crds.feature- clusterRef:
modules/manage/examples/kubernetes/schema-crds.feature- name: basic
modules/manage/examples/kubernetes/schema-crds.feature- references:
modules/manage/examples/kubernetes/schema-crds.feature: - name: product-schema
modules/manage/examples/kubernetes/schema-crds.feature- subject: product
modules/manage/examples/kubernetes/schema-crds.feature- version: 1
modules/manage/examples/kubernetes/schema-crds.feature- text: |
modules/manage/examples/kubernetes/schema-crds.feature-
modules/manage/examples/kubernetes/schema-crds.feature- @skip:gke @skip:aks @skip:eks
modules/manage/examples/kubernetes/schema-crds.feature: Scenario: Manage order schema with references (Avro)
modules/manage/examples/kubernetes/schema-crds.feature- Given there is no schema "order-schema" in cluster "basic"
modules/manage/examples/kubernetes/schema-crds.feature- When I apply Kubernetes manifest:
modules/manage/examples/kubernetes/schema-crds.feature- """
modules/manage/examples/kubernetes/schema-crds.feature:# tag::schema-references-manifest[]
modules/manage/examples/kubernetes/schema-crds.feature: # This manifest creates an Avro schema named "order-schema" that references another schema.
modules/manage/examples/kubernetes/schema-crds.feature: # Schema references enable modular and reusable schema components for complex data structures.
modules/manage/examples/kubernetes/schema-crds.feature- ---
modules/manage/examples/kubernetes/schema-crds.feature- apiVersion: cluster.redpanda.com/v1alpha2
--
modules/manage/examples/kubernetes/schema-crds.feature- ]
modules/manage/examples/kubernetes/schema-crds.feature- }
modules/manage/examples/kubernetes/schema-crds.feature:# end::schema-references-manifest[]
modules/manage/examples/kubernetes/schema-crds.feature- """
modules/manage/examples/kubernetes/schema-crds.feature- And schema "order-schema" is successfully synced
Test scenario references a non-existent schema dependency.
The order-schema scenario declares a reference to product-schema with subject: product and version: 1:
references:
- name: product-schema
  subject: product
  version: 1

However, no schema named product-schema (or with subject product) is created anywhere in this feature file or the test suite. The scenario includes assertions expecting successful sync and compatibility checks, which will fail because the referenced schema cannot be resolved.

Add a preceding scenario that creates a product-schema, or adjust the reference to point to an existing schema defined in this file (e.g., product-catalog).
@skip:gke @skip:aks @skip:eks
Scenario: Manage topic with cleanup policy
  Given there is no topic "compacted-topic" in cluster "basic"
  When I apply Kubernetes manifest:
    """
    # tag::cleanup-policy-topic-example[]
    # This manifest creates a topic with the cleanup policy set to "delete".
    # The cleanup policy determines how partition log files are managed when they reach a certain size.
    ---
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Topic
    metadata:
      name: compacted-topic
    spec:
      cluster:
        clusterRef:
          name: basic
      partitions: 3
      replicationFactor: 1
      additionalConfig:
        cleanup.policy: "delete"
    # end::cleanup-policy-topic-example[]
    """
  And topic "compacted-topic" is successfully synced
  Then I should be able to produce and consume from "compacted-topic" in cluster "basic"
Topic name compacted-topic is misleading given cleanup.policy: "delete".
The scenario creates a topic named compacted-topic but sets cleanup.policy: "delete". This is confusing because:

- compacted-topic suggests log compaction (cleanup.policy: "compact")
- The actual policy is "delete", which deletes data based on retention settings

Consider renaming the topic to something like retained-topic or delete-policy-topic to match its configuration, or change the policy to "compact" if the intent was to demonstrate compaction.
Proposed fix (option 1: rename topic)
@skip:gke @skip:aks @skip:eks
Scenario: Manage topic with cleanup policy
- Given there is no topic "compacted-topic" in cluster "basic"
+ Given there is no topic "delete-policy-topic" in cluster "basic"
When I apply Kubernetes manifest:
"""
# tag::cleanup-policy-topic-example[]
# This manifest creates a topic with the cleanup policy set to "delete".
# The cleanup policy determines how partition log files are managed when they reach a certain size.
---
apiVersion: cluster.redpanda.com/v1alpha2
kind: Topic
metadata:
- name: compacted-topic
+ name: delete-policy-topic
spec:
cluster:
clusterRef:
name: basic
partitions: 3
replicationFactor: 1
additionalConfig:
cleanup.policy: "delete"
# end::cleanup-policy-topic-example[]
"""
- And topic "compacted-topic" is successfully synced
- Then I should be able to produce and consume from "compacted-topic" in cluster "basic"
+ And topic "delete-policy-topic" is successfully synced
+ Then I should be able to produce and consume from "delete-policy-topic" in cluster "basic"
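The reviewer's second option (keep the name, change the policy) would instead amount to the following manifest variant. This is a sketch assuming the same Topic CRD fields as the example above; only the cleanup.policy value and the explanatory comments differ:

```yaml
# Hypothetical variant of the example topic: the name and policy now agree.
apiVersion: cluster.redpanda.com/v1alpha2
kind: Topic
metadata:
  name: compacted-topic
spec:
  cluster:
    clusterRef:
      name: basic
  partitions: 3
  replicationFactor: 1
  additionalConfig:
    # "compact" retains only the latest record per key instead of
    # deleting segments based on retention settings.
    cleanup.policy: "compact"
```

Either way, the scenario name, the tag comments, and the docs page that includes the snippet should describe the same behavior the manifest configures.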
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Extracted inline examples from k-schema-controller.adoc (full compatibility schema, schema references) into schema-crds.feature with proper tags and include directives
- Extracted inline examples from k-manage-topics.adoc (write caching topic, cleanup policy topic) into topic-crds.feature with proper tags and include directives

Test plan
🤖 Generated with Claude Code