diff --git a/.gitignore b/.gitignore index b2d6de3..ec80eba 100644 --- a/.gitignore +++ b/.gitignore @@ -18,3 +18,4 @@ npm-debug.log* yarn-debug.log* yarn-error.log* +.idea/ diff --git a/BACKLOG.md b/BACKLOG.md new file mode 100644 index 0000000..4d1d64c --- /dev/null +++ b/BACKLOG.md @@ -0,0 +1,51 @@ +# BACKLOG.md - Open Control Plane Documentation Project + +## Backlog + + +## Next +- [x] More engaging tone of address (improve language/tone) +- [x] Color gradient (gradient styling) [ott] +- [ ] Related? Crossplane etc. (add related projects section) [ott] +- [ ] Are CRDs reflected? (stretch goal) [ott] +- [ ] Teal color spectrum and content-aware coloring [ott] +- [ ] Maybe in browser with WASM? (stretch) +- [x] Contributing: Design decisions (ADRs et al.) +- [ ] Index: What do I want to do +- [ ] Onboarding API - why/what it helps for +- [ ] Johannes change icons +- [ ] End user getting started: MCP Server - max b to publish +- [ ] End user getting started: Edit docs +- [ ] Operators: getting started (currently empty) +- [ ] Operators: Happy Path docs +- [ ] Contributing: How do I do what - Extend - Fundamental - How to + +## In Progress + +## Review + +## Done + +- [x] Theme-aware section backgrounds (CSS vars instead of hardcoded dark) +- [x] Navbar scroll transparency effect (transparent at top, solid on scroll) +- [x] Footer logo sizing (EU + NeoNephos enlarged) +- [x] Light-mode axolotl drop shadow (0.25 opacity + gradient blob restored) +- [x] Content max-width alignment (1152px hero/features/footer, 1152px navbar) +- [x] Swizzled footer — 3-row layout (EU banner, copyright, legal links) +- [x] NeoNephos SVG logo in footer +- [x] Feature card mouse-tracking glow effect +- [x] Axolotl drop shadow (dark mode) +- [x] Favicon update (axolotl mascot, multi-size .ico) +- [x] SVG graphics/icons (co_axolotl.svg vector trace, 160KB) +- [x] Navbar logo — mirrored axolotl facing left + transparent background variant +- [x] End user
getting started: Prerequisites (platform installed) +- [x] End user getting started: authentication/authorization +- [x] End user getting started: pictures (hierarchy diagram) +- [x] Double check no openMCP → OpenControlPlane (cleanup legacy naming) +- [x] Include SIGs +- [x] Community page - how to participate +- [x] Contributing: How can I participate - in the community + +--- + +*This board tracks granular tasks for the Open Control Plane documentation project. For cross-project status, see the main Projects.md overview.* diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..d241186 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,96 @@ +# CLAUDE.md - OpenControlPlane Documentation + +## About + +**OpenControlPlane** provides Infrastructure- and Configuration-as-Data capabilities via Kubernetes Resource Model APIs. Part of ApeiroRA (IPCEI-CIS European cloud initiative). + +> **Note:** Legacy "openMCP" references should be updated to "OpenControlPlane" + +## Tech Stack + +- **Docusaurus 3** (React-based docs) +- **TypeScript** + **MDX** +- **Mermaid** diagrams +- **Node.js 18+** + +## Brand Colors + +**Teal Spectrum** (primary brand colors): +- **Teal 2** (lightest): `#C2FCEE` | RGB 194/252/238 | Pantone 317 C +- **Teal 4** (bright): `#2CE0BF` | RGB 44/224/191 | Pantone 3255 C +- **Teal 6** (medium): `#049F9A` | RGB 4/159/154 | Pantone 2233 C +- **Teal 7** (darker): `#07838F` | RGB 7/131/143 | Pantone 2235 C +- **Teal 10** (dark): `#02414C` | RGB 2/65/76 | Pantone 2215 C +- **Teal 11** (darkest): `#012931` | RGB 1/41/49 | Pantone 2189 C + +**Usage:** +- Use lighter teals (2, 4) for backgrounds and highlights +- Medium teals (6, 7) for primary actions and branding +- Darker teals (10, 11) for text and contrast elements +- Consider gradients between adjacent teal shades + +## Structure + +``` +docs/ +├── about/ # Project overview, concepts, design +├── users/ # End-user guides +├── operators/ # Platform
operator docs +└── developers/ # Contributor/dev docs +adrs/ # Architectural Decision Records +templates/adr.* # ADR template + script +``` + +## Commands + +```bash +npm start # Dev server (localhost:3000) +npm run build # Production build +npm run new-adr # Create new ADR +npm run typecheck # TypeScript validation +``` + +## Core Concepts + +- **Control Plane** - Kubernetes API server as a service +- **Service/Cluster Providers** - Entities offering services/clusters +- **Platform Service** - Software/infrastructure via control plane +- **OCM** - Open Component Model integration + +## Workflow + +1. **Content** → Add to appropriate `docs/` subfolder +2. **ADRs** → Use `npm run new-adr` +3. **Test** → `npm run build` + `npm run typecheck` +4. **Deploy** → GitHub Pages on `main` branch push + +## Writing Style + +**Friendly but precise** - Be welcoming to newcomers while technically accurate for experts. + +**Concise** - Respect readers' time. Get to the point quickly. + +**Structure patterns:** +- Lead with **what** and **why** before **how** +- Use bullets and short paragraphs +- Include examples for every concept +- Link to related docs liberally + +**Voice:** +- "You can..." not "One might..." +- "This feature helps you..." not "This feature provides the capability to..." +- Active voice: "Install the CLI" not "The CLI should be installed" + +**Technical precision:** +- Exact command syntax with copy-paste examples +- Specify prerequisites clearly +- Note version requirements +- Include common error solutions + +## Sidebars + +Auto-generated from directory structure. Control order with `sidebar_position` frontmatter. + +--- + +*Update BACKLOG.md when tasks move. 
Test locally before committing.* \ No newline at end of file diff --git a/COLOR_SCHEME_REFERENCE.md b/COLOR_SCHEME_REFERENCE.md new file mode 100644 index 0000000..d80fca4 --- /dev/null +++ b/COLOR_SCHEME_REFERENCE.md @@ -0,0 +1,50 @@ +# Role-Based Color Scheme - COMPLETE IMPLEMENTATION + +## CSS Variables (Lines 39-57 in custom.css) + +```css +/* Role-based Teal Spectrum */ +--teal-2: #C2FCEE; +--teal-4: #2CE0BF; +--teal-6: #049F9A; +--teal-7: #07838F; /* End User Primary */ +--teal-10: #02414C; /* Operator Primary */ +--teal-11: #012931; /* Contributor Primary */ + +/* Role Colors */ +--role-enduser-primary: var(--teal-7); /* #07838F */ +--role-enduser-secondary: var(--teal-10); /* #02414C */ +--role-operator-primary: var(--teal-10); /* #02414C */ +--role-operator-secondary: var(--teal-11); /* #012931 */ +--role-contributor-primary: transparent; +--role-contributor-border: var(--teal-11); /* #012931 */ +``` + +## Where Colors Appear + +### 1. Hero Section Buttons (Landing Page) +- **"Get Started"** → Teal 7 (#07838F), hover: Teal 10 +- **"Run Your Platform"** → Teal 10 (#02414C), hover: Teal 11 +- **"Build Together"** → Transparent, hover: subtle tint + +### 2. Navbar Links (Top Navigation) +- Hover/Active states show role colors +- Bottom border accent appears on hover +- Colors: Teal 7 (users), Teal 10 (operators), Teal 11 (developers) + +### 3. Sidebar (Documentation Sections) +- Left border shows section color +- Active menu items highlighted in role color +- Hover states use lighter tint of role color +- Different color per section: userDocs, operatorDocs, developerDocs + +## Color Gradient Philosophy + +Light → Dark = Beginner → Advanced + +- **Teal 7** (lighter) = End users, getting started, welcoming +- **Teal 10** (medium dark) = Operators, platform management, professional +- **Teal 11** (darkest) = Contributors, developers, technical depth +- **Transparent** = Open invitation to contribute + +This creates a visual progression through the documentation that mirrors the user journey.
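As an illustration of how these variables might be consumed, a minimal usage sketch follows; the selectors below are assumptions for illustration, not the actual class names in custom.css:

```css
/* Hypothetical usage sketch; actual selectors in custom.css may differ. */
.hero .button--get-started {
  background-color: var(--role-enduser-primary);     /* Teal 7 */
}
.hero .button--get-started:hover {
  background-color: var(--role-enduser-secondary);   /* Teal 10 */
}
.hero .button--build-together {
  background-color: var(--role-contributor-primary); /* transparent */
  border: 1px solid var(--role-contributor-border);  /* Teal 11 */
}
```

Keeping the role indirection (`--role-*` pointing at `--teal-*`) means a rebrand only touches the variable block, not every selector.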
diff --git a/README.md b/README.md index dbd488d..a77db47 100644 --- a/README.md +++ b/README.md @@ -48,4 +48,4 @@ We as members, contributors, and leaders pledge to make participation in our com ## Licensing -Copyright 2025 SAP SE or an SAP affiliate company and openMCP contributors. Please see our [LICENSE](LICENSE) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/openmcp-project/docs). +Copyright 2025 SAP SE or an SAP affiliate company and OpenControlPlane contributors. Please see our [LICENSE](LICENSE) for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available [via the REUSE tool](https://api.reuse.software/info/github.com/openmcp-project/docs). diff --git a/adrs/2025-07-17-local-dns.md b/adrs/2025-07-17-local-dns.md index 7aab839..c8fa2df 100644 --- a/adrs/2025-07-17-local-dns.md +++ b/adrs/2025-07-17-local-dns.md @@ -7,7 +7,7 @@ authors: ## Context and Problem Statement -When creating services on a Kubernetes cluster, they shall be accessible from other clusters within an openMCP landscape. To achieve this a `Gateway` and `HTTPRoute` resource is created. The Gateway controller will assign a routable IP address to the Gateway resource. The HTTPRoute resource will then be used to route traffic to the service. +When creating services on a Kubernetes cluster, these services shall be accessible from other clusters within an OpenControlPlane landscape. To achieve this, a `Gateway` and an `HTTPRoute` resource are created. The Gateway controller will assign a routable IP address to the Gateway resource. The HTTPRoute resource will then be used to route traffic to the service.
```yaml apiVersion: gateway.networking.k8s.io/v1beta1 @@ -49,18 +49,18 @@ spec: port: 80 ``` -The problem is that the service is only reachable via the IP address and not via the hostname. This is because the DNS server in the openMCP landscape does not know about the service and therefore cannot resolve the hostname to the IP address. The Kubernetes dns service only knows how to route to service within the same cluster. On an openMCP landscape however, services must be reachable from other clusters by stable host names. +The problem is that the service is only reachable via the IP address and not via the hostname. This is because the DNS server in the OpenControlPlane landscape does not know about the service and therefore cannot resolve the hostname to the IP address. The Kubernetes DNS service only knows how to route to services within the same cluster. In an OpenControlPlane landscape, however, services must be reachable from other clusters by stable host names. -Therefore there is a need for a openMCP DNS solution that makes these host names resolvable on all clusters that ar part of the openMCP landscape. +Therefore, there is a need for an OpenControlPlane DNS solution that makes these host names resolvable on all clusters that are part of the OpenControlPlane landscape. -## openMCP DNS System Service +## OpenControlPlane DNS System Service -To solve the stated problem, a `openMCP DNS System Service` is needed. This system service will be responsible for the following tasks: +To solve the stated problem, an `OpenControlPlane DNS System Service` is needed. This system service will be responsible for the following tasks: -* Deploy a central openMCP DNS server in the openMCP landscape. This DNS server will be used to resolve all host names in the openMCP base domain `openmcp.cluster`.
-* For each cluster in the openMCP landscape, the system service will configure the Kubernetes local DNS service to forward DNS queries for the openMCP base domain to the central openMCP DNS server. This will ensure that all clusters can resolve host names in the openMCP base domain. -* For each Gateway or Ingress resource, the system service will create a DNS entry in the central openMCP DNS server. The DNS entry will map the hostname to the IP address of the Gateway or Ingress resource. -* For each cluster in the openMCP landscape, the system service will annotate the `Cluster` resource with the openMCP base domain. This will help service providers to configure their services to use the openMCP base domain for their host names. +* Deploy a central OpenControlPlane DNS server in the OpenControlPlane landscape. This DNS server will be used to resolve all host names in the OpenControlPlane base domain `openmcp.cluster`. +* For each cluster in the OpenControlPlane landscape, the system service will configure the Kubernetes local DNS service to forward DNS queries for the OpenControlPlane base domain to the central OpenControlPlane DNS server. This will ensure that all clusters can resolve host names in the OpenControlPlane base domain. +* For each Gateway or Ingress resource, the system service will create a DNS entry in the central OpenControlPlane DNS server. The DNS entry will map the hostname to the IP address of the Gateway or Ingress resource. +* For each cluster in the OpenControlPlane landscape, the system service will annotate the `Cluster` resource with the OpenControlPlane base domain. This will help service providers to configure their services to use the OpenControlPlane base domain for their host names. This shall be completely transparent to a service provider. The service provider only needs to create a Gateway or Ingress resource and the DNS entry will be created automatically. 
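The transparent flow the ADR describes can be sketched as a minimal Gateway/HTTPRoute pair whose hostname the DNS system service would register automatically. The namespace, service name, and gateway class below are illustrative assumptions, not values from a real landscape:

```yaml
# Hypothetical service exposure in an OpenControlPlane landscape.
# The DNS system service picks up the hostname from the HTTPRoute and
# creates a matching record in the central DNS server automatically.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: demo            # illustrative
spec:
  gatewayClassName: istio    # illustrative gateway class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-service-route
  namespace: demo
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    # resolvable from any cluster once the DNS entry exists
    - my-service.demo.openmcp.cluster
  rules:
    - backendRefs:
        - name: my-service   # illustrative backend Service
          port: 80
```

The service provider creates only these two resources; the DNS entry mapping the hostname to the Gateway's routable IP is created without further action.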
@@ -75,7 +75,7 @@ For the example implementation, following components are used: The `DNS Provider` is running on the platform cluster. The `DNS Provider` is deploying an `ETCD` and the cental `CoreDNS` instance on the platform cluster. The `ETCD` instance is used to store the DNS entries. The `CoreDNS` is reading the DNS entries from the `ETCD` instance and is used to resolve the host names. ```yaml -# CoreDNS configuration to read DNS entries from ETCD for the openMCP base domain `openmcp.cluster` +# CoreDNS configuration to read DNS entries from ETCD for the OpenControlPlane base domain `openmcp.cluster` - name: etcd parameters: openmcp.cluster configBlock: |- @@ -95,7 +95,7 @@ containers: - --source=ingress - --source=gateway-httproute - --provider=coredns - - --domain-filter=platform.openmcp.cluster # only detect hostnames in the openMCP base domain belonging to the cluster + - --domain-filter=platform.openmcp.cluster # only detect hostnames in the OpenControlPlane base domain belonging to the cluster env: - name: ETCD_URLS value: http://172.18.200.2:2379 # external routable IP of the ETCD instance running on the platform cluster @@ -123,7 +123,7 @@ subgraph Platform Cluster end ``` -The `DNS Provider` is updating the `CoreDNS` configuration on the platform cluster and on all other clusters. The `CoreDNS` configuration is updated to forward DNS queries for the openMCP base domain to the central `CoreDNS` instance running on the platform cluster. This will ensure that all clusters can resolve host names in the openMCP base domain. +The `DNS Provider` is updating the `CoreDNS` configuration on the platform cluster and on all other clusters. The `CoreDNS` configuration is updated to forward DNS queries for the OpenControlPlane base domain to the central `CoreDNS` instance running on the platform cluster. This will ensure that all clusters can resolve host names in the OpenControlPlane base domain. 
```corefile openmcp.cluster { @@ -146,4 +146,4 @@ subgraph Platform Cluster end ``` -Then on any pod in any cluster of the openMCP landscape, the hostname can be resolved to the IP address of the Gateway or Ingress resource. +Then on any pod in any cluster of the OpenControlPlane landscape, the hostname can be resolved to the IP address of the Gateway or Ingress resource. diff --git a/adrs/2025-08-12-mcp-namespace-strategy.md b/adrs/2025-08-12-mcp-namespace-strategy.md index 9e0cfe6..5c1a587 100644 --- a/adrs/2025-08-12-mcp-namespace-strategy.md +++ b/adrs/2025-08-12-mcp-namespace-strategy.md @@ -7,7 +7,7 @@ authors: ## Context and Problem Statement -In the openMCP platform, we need to determine how to organize resources in the Platform Cluster that belong to Managed Control Planes (MCPs). Each MCP represents a separate tenant or customer environment that needs to be isolated and managed independently. The key question is: Should every MCP on the Platform Cluster have its own Namespace to ensure proper isolation, resource management, and security boundaries? +In the OpenControlPlane platform, we need to determine how to organize resources in the Platform Cluster that belong to Managed Control Planes (MCPs). Each MCP represents a separate tenant or customer environment that needs to be isolated and managed independently. The key question is: Should every MCP on the Platform Cluster have its own Namespace to ensure proper isolation, resource management, and security boundaries? Without proper namespace isolation, MCPs could interfere with each other, leading to security vulnerabilities, resource conflicts, and operational complexity. 
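If the per-MCP namespace option were adopted, the isolation boundary on the Platform Cluster could look like the following sketch; the names, label key, and quota values are illustrative assumptions, not part of the decision record:

```yaml
# Hypothetical per-MCP namespace on the Platform Cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: mcp-customer-a              # one namespace per Managed Control Plane
  labels:
    openmcp.cloud/mcp: customer-a   # illustrative label key
---
# A ResourceQuota keeps one tenant's workloads from starving another's.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mcp-quota
  namespace: mcp-customer-a
spec:
  hard:
    pods: "50"
    requests.cpu: "10"
```

Pairing each namespace with a quota and, where needed, NetworkPolicies is one way to address the interference and resource-conflict concerns raised above.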
diff --git a/docs/about/concepts/cluster-provider.md b/docs/about/concepts/cluster-provider.md deleted file mode 100644 index 73c72bb..0000000 --- a/docs/about/concepts/cluster-provider.md +++ /dev/null @@ -1,3 +0,0 @@ -# Cluster Providers - -Cluster providers are responsible for the dynamic creation, modification, and deletion of Kubernetes clusters in an openMCP environment. They conceal certain cluster technologies (e.g., [Gardener](https://gardener.cloud/) and [Kubernetes-in-Docker](https://kind.sigs.k8s.io/)) behind a homogeneous interface. This allows operators to install an openMCP system in different environments and on various infrastructure providers without having to adjust the other components of the system accordingly. diff --git a/docs/about/concepts/managed-control-plane.md b/docs/about/concepts/managed-control-plane.md deleted file mode 100644 index e58c9f3..0000000 --- a/docs/about/concepts/managed-control-plane.md +++ /dev/null @@ -1,3 +0,0 @@ -# Managed Control Planes (MCPs) - -Managed Control Planes (MCPs) are at the heart of openMCP. Simply put, they are lightweight Kubernetes clusters that store the desired state and current status of various resources. All resources follow the Kubernetes Resource Model (KRM), allowing infrastructure resources, deployments, etc., to be managed with common Kubernetes tools like kubectl, kustomize, Helm, Flux, ArgoCD, and so on. diff --git a/docs/about/concepts/platform-service.md b/docs/about/concepts/platform-service.md deleted file mode 100644 index aea5020..0000000 --- a/docs/about/concepts/platform-service.md +++ /dev/null @@ -1,3 +0,0 @@ -# Platform Services - -Platform services add functionality to an openMCP environment (not MCPs). Examples include network services (Gateway API, Ingress), audit logs, billing, grouping of MCPs, and system-wide policies. They are installed and configured by the platform operator and apply to the entire system. 
diff --git a/docs/about/concepts/service-provider.md b/docs/about/concepts/service-provider.md deleted file mode 100644 index 5bfcf14..0000000 --- a/docs/about/concepts/service-provider.md +++ /dev/null @@ -1,3 +0,0 @@ -# Service Providers - -Without service providers, MCPs are of little use. They add functionality such as cloud provider APIs, GitOps, policies, or backup and restore to MCPs. The operators of an openMCP environment decide which service providers are available to end users. The end users can then activate them for their MCPs. diff --git a/docs/about/ecosystem.md b/docs/about/ecosystem.md deleted file mode 100644 index d2b53a6..0000000 --- a/docs/about/ecosystem.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -sidebar_position: 2 ---- - -# Ecosystem - -openMCP is a platform built on top of amazing open-source projects. The major ones are listed below. - -## Kubernetes - -"[Kubernetes](https://kubernetes.io/), also known as K8s, is an open source system for automating deployment, scaling, and management of containerized applications."[^kubernetes] openMCP not only runs on Kubernetes but also uses the Kubernetes API as the central interface for all human users as well as integrations and automations. The components of openMCP extend the Kubernetes API through [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), enabling the use of Kubernetes for configuring more than just compute, storage, and networking resources. - -## Gardener - -[Gardener](https://gardener.cloud/) delivers "fully-managed clusters at scale everywhere with your own Gardener installation".[^gardener] Supported infrastructure includes AWS, Azure, and GCP but also OpenStack, [IronCore](https://github.com/ironcore-dev/gardener-extension-provider-ironcore), [Hetzner Cloud](https://github.com/23technologies/gardener-extension-provider-hcloud), and others. 
Like openMCP, Gardener is a Kubernetes extension and "adheres to the same principles for resiliency, manageability, observability and high automation by design".[^gardener] openMCP can use Gardener as a [cluster provider](concepts/cluster-provider.md). - -## Open Component Model - -"The [Open Component Model (OCM)](https://ocm.software/) is an open standard that enables teams to describe software artifacts and their lifecycle metadata in a consistent, technology-agnostic way."[^ocm] openMCP uses the OCM to package components and their dependencies, ensuring a reliable delivery to any (even air-gapped) environment. - -## Crossplane - -"[Crossplane](https://www.crossplane.io/) is an open source, CNCF project built on the foundation of Kubernetes to orchestrate anything."[^crossplane] It makes use of providers to connect to various cloud APIs – a concept that is known from Terraform/OpenTofu. Enabling Crossplane as a [service provider](concepts/service-provider.md) in openMCP allows end-users to make use of the rich ecosystem of Crossplane providers. - -## Flux - -"[Flux](https://fluxcd.io/) is a set of continuous and progressive delivery solutions for Kubernetes that are open and extensible."[^fluxcd] When enabled in an openMCP environment, users can benefit from [GitOps](https://www.cncf.io/blog/2025/06/09/gitops-in-2025-from-old-school-updates-to-the-modern-way/) features as part of their [MCPs](concepts/managed-control-plane.md). - -## Kyverno - -"The [Kyverno](https://kyverno.io/) project provides a comprehensive set of tools to manage the complete Policy-as-Code (PaC) lifecycle for Kubernetes and other cloud native environments."[^kyverno] With Kyverno, both team-internal and organization-wide policies can be defined to establish minimum security standards for managed cloud resources or to represent other corporate standards. 
- -## External Secrets - -"External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, [...] and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret."[^externalsecrets] In conjunction with other services like Crossplane and Flux, users can define their landscapes as templates and deploy them without code duplication. The External Secrets Operator can not only import secrets into an MCP but also push secrets generated in the MCP to other systems. - -## Landscaper - -"Landscaper provides the means to describe, install and maintain cloud-native landscapes. It allows you to express an order of building blocks, connect output with input data and ultimately, bring your landscape to live."[^landscaper] Operators can activate Landscaper as a service provider in their openMCP environment to ease the rollout of more complex software products for their users. - -[^kubernetes]: https://kubernetes.io/ -[^gardener]: https://gardener.cloud/ -[^ocm]: https://ocm.software/docs/overview/about/ -[^crossplane]: https://www.crossplane.io/ -[^fluxcd]: https://fluxcd.io/ -[^kyverno]: https://kyverno.io/ -[^externalsecrets]: https://external-secrets.io/latest/ -[^landscaper]: https://github.com/gardener/landscaper/blob/master/README.md diff --git a/docs/about/project.md b/docs/about/project.md deleted file mode 100644 index 2d5082a..0000000 --- a/docs/about/project.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -slug: / -sidebar_position: 1 ---- - -# About openMCP - -👋 Welcome to the documentation of openMCP. We are part of [ApeiroRA](https://apeirora.eu/content/projects/) which is an Important Project of Common European Interest - Next Generation Cloud Infrastructures and Services (IPCEI-CIS). - -## 🌐 ApeiroRA? 
- -ApeiroRA is a reference blueprint for an open, flexible, secure, and compliant next-generation cloud-edge continuum and therefore a key contribution to IPCEI-CIS. At a high level, the projects of ApeiroRA allow users to provider-agnostically fetch, request and consume services, and for service providers to describe, offer and provision their services. - -By being open source, ApeiroRA provides a cross-border spillover effect, solidifying the foundation and future of the project. - -Learn more about ApeiroRA by checking out the official website at [https://apeirora.eu/](https://apeirora.eu/). - -## 🤝 openMCP and ApeiroRA - -The Open Managed Control Plane (openMCP) enables extensible Infrastructure- and Configuration-as-Data capabilities as a Service. Based on the Kubernetes Resource Model, all resources in the cloud-edge continuum with ApeiroRA are accessible and managed via a declarative API and corresponding controllers and operators. Together with the controller which understand OCM and declarative deployment orchestrators, consumers can subscribe to a product release-train of software producers and implement an automated, GitOps-driven deployment workflow at the edges. - -## 👥 Get Involved - -We welcome contributions of all kinds, from code to documentation, testing, and design. If you're interested in getting involved, check out our [open issues](https://github.com/issues?q=is%3Aopen+is%3Aissue+org%3Aopenmcp-project+archived%3Afalse+). - -## 🌈 Code of Conduct - -To facilitate a nice environment for all, check out [our Code of Conduct](https://github.com/openmcp-project/.github/blob/main/CODE_OF_CONDUCT.md). 
- -## 🪙 Funding - -![Bundesministerium für Wirtschaft und Energie (BMWE)-EU funding logo](/img/BMWK-EU.png) diff --git a/docs/community/00-overview.md b/docs/community/00-overview.md new file mode 100644 index 0000000..5ea9d18 --- /dev/null +++ b/docs/community/00-overview.md @@ -0,0 +1,23 @@ +--- +sidebar_position: 1 +--- + +# Community + +Welcome to the OpenControlPlane community! + +## Get Involved + +- **GitHub**: [openmcp-project](https://github.com/openmcp-project) - Issues, discussions, code +- **Contribute**: Check our [Contributing Guide](https://github.com/openmcp-project/community/blob/main/CONTRIBUTING.md) +- **Join a SIG**: Participate in [Special Interest Groups](./01-sigs.md) +- **Attend Meetings**: See our [meeting schedule](./02-meetings.md) + +## Code of Conduct + +We follow the [Contributor Covenant Code of Conduct](https://github.com/openmcp-project/.github/blob/main/CODE_OF_CONDUCT.md). + +## Related Communities + +- [ApeiroRA](https://apeirora.eu/) - European cloud initiative +- [NeoNephos](https://neonephos.org/) - Cloud-native ecosystem diff --git a/docs/community/01-sigs.md b/docs/community/01-sigs.md new file mode 100644 index 0000000..458c56b --- /dev/null +++ b/docs/community/01-sigs.md @@ -0,0 +1,42 @@ +--- +sidebar_position: 2 +--- + +# Special Interest Groups (SIGs) + +SIGs are the primary organizational unit for focused work within OpenControlPlane. Each SIG has a charter, leads, and regular meetings. + +## SIG Extensibility + +**Focus**: Make it easy to build, share, and adopt extensions—service providers, cluster providers, and platform services. 
+ +| | | +|---|---| +| **Leads** | Maximilian Techritz, Christopher Junk (SAP) | +| **Meetings** | Bi-weekly, Wednesday 3PM CET | +| **Mailing List** | openMCP-extensibility@lists.neonephos.org | +| **Charter** | [GitHub](https://github.com/openmcp-project/community/tree/main/sig-extensibility) | + +### Scope + +**In scope:** +- Developer tooling: templates, frameworks, SDKs +- Increasing service options for end users +- Technical standardization for extensibility + +**Out of scope:** +- Core APIs (ServiceProvider, ClusterProvider, etc.) +- Fundamental platform services (e.g., platform-service-gateway) + +### Subprojects + +- Crossplane service providers +- Landscaper service providers +- Gardener cluster providers +- Kind cluster providers +- Service provider templates +- Testing infrastructure + +## Starting a New SIG + +See the [SIG template](https://github.com/openmcp-project/community/blob/main/sigs/sig-template.md) in the community repo. diff --git a/docs/community/02-meetings.md b/docs/community/02-meetings.md new file mode 100644 index 0000000..93fb71b --- /dev/null +++ b/docs/community/02-meetings.md @@ -0,0 +1,23 @@ +--- +sidebar_position: 3 +--- + +# Community Meetings + +## SIG Extensibility + +| | | +|---|---| +| **When** | Bi-weekly, Wednesday 3PM CET | +| **Where** | [Meeting link TBD] | +| **Notes** | [GitHub Discussions](https://github.com/openmcp-project/community/discussions) | + +### How to Join + +1. Check the meeting schedule above +2. Join via the meeting link at the scheduled time +3. Add agenda items via GitHub discussions before the meeting + +## Past Meeting Notes + +Meeting notes are archived in the [community repo](https://github.com/openmcp-project/community). 
diff --git a/docs/developers/00-getting-started.md b/docs/developers/00-getting-started.md deleted file mode 100644 index 3c72e6d..0000000 --- a/docs/developers/00-getting-started.md +++ /dev/null @@ -1,9 +0,0 @@ -# Getting Started - -Here you will find all the information you need to get started with our project. Whether you're a beginner or an experienced developer, this guide will help you set up your environment and start contributing. - -Have a look at our [design documentation](./../about/design/service-provider.md) to understand the architecture and design principles behind the project. - -Check our guide on [how to create a Service Provider](./../developers/service-providers.md) to turn the next Kubernetes-native application into an as-a-Service offering. - -Let's get started in building your first Service Provider for the OpenMCP ecosystem! diff --git a/docs/developers/00-getting-started.mdx b/docs/developers/00-getting-started.mdx new file mode 100644 index 0000000..fd3833e --- /dev/null +++ b/docs/developers/00-getting-started.mdx @@ -0,0 +1,13 @@ +--- +sidebar_position: 0 +--- + +# Getting Started + +Here you will find all the information you need to get started with our project. Whether you're a beginner or an experienced developer, this guide will help you set up your environment and start contributing. + +Have a look at our [design documentation](../users/design/service-provider) to understand the architecture and design principles behind the project. + +Check our guide on [how to create a Service Provider](./serviceprovider/service-providers) to turn the next Kubernetes-native application into an as-a-Service offering. + +Let's get started in building your first Service Provider for the OpenControlPlane ecosystem! 
diff --git a/docs/developers/provider_deployment.md b/docs/developers/_clusterprovider/01-deployment.mdx similarity index 88% rename from docs/developers/provider_deployment.md rename to docs/developers/_clusterprovider/01-deployment.mdx index ff3ba07..2705462 100644 --- a/docs/developers/provider_deployment.md +++ b/docs/developers/_clusterprovider/01-deployment.mdx @@ -1,15 +1,17 @@ -# Provider Deployment +--- +sidebar_position: 1 +--- -The openMCP architecture knows three different kinds of providers: -- `ClusterProviders` manage kubernetes clusters and access to them -- `PlatformServices` provide landscape-wide service functionalities -- `ServiceProviders` provide the actual services that can be consumed by customers via the ManagedControlPlanes +# Deploy -All providers can automatically be deployed via the corresponding provider resources: `ClusterProvider`, `PlatformService`, and `ServiceProvider`. The [openmcp-operator](https://github.com/openmcp-project/openmcp-operator) is responsible for these resources. +`ClusterProviders` manage Kubernetes clusters and access to them within the OpenControlPlane ecosystem. + +They can automatically be deployed via the `ClusterProvider` resource. The [openmcp-operator](https://github.com/openmcp-project/openmcp-operator) is responsible for these resources. + +All providers are cluster-scoped resources. + +## Example ClusterProvider Resource -For now, the spec of all three provider kinds looks exactly the same, which is why they are all explained together. -All of them are cluster-scoped resources. -This is a `ClusterProvider` resource as an example: ```yaml apiVersion: openmcp.cloud/v1alpha1 kind: ClusterProvider @@ -22,12 +24,10 @@ spec: ## Common Provider Contract -This section explains the contract that provider implementations must follow for the deployment to work. +All provider types (ClusterProviders, ServiceProviders, PlatformServices) follow the same deployment contract. 
### Executing the Binary -Further information on how the provider binary is executed can be found below. - #### Image Each provider implementation must provide a container image with the provider binary set as an entrypoint. @@ -83,9 +83,12 @@ Providers generally live in the platform cluster, so they can simply access it b This flow is already implemented in the library function [`CreateAndWaitForCluster`](https://github.com/openmcp-project/openmcp-operator/blob/v0.11.2/lib/clusteraccess/clusteraccess.go#L387). -### Examples +## Deployment Example + +The `ClusterProvider` resource above will result in the following `Job` and `Deployment` (redacted to the more relevant fields): + +### Init Job -Basically, the `ClusterProvider` from the example above will result in the following `Job` and `Deployment` (redacted to the more relevant fields): ```yaml apiVersion: batch/v1 kind: Job @@ -161,6 +164,9 @@ spec: serviceAccount: gardener-init serviceAccountName: gardener-init ``` + +### Controller Deployment + ```yaml apiVersion: apps/v1 kind: Deployment diff --git a/docs/developers/_clusterprovider/_category_.json b/docs/developers/_clusterprovider/_category_.json new file mode 100644 index 0000000..8ef7669 --- /dev/null +++ b/docs/developers/_clusterprovider/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "Cluster Providers", + "position": 3, + "link": { + "type": "generated-index", + "description": "Cluster Providers manage Kubernetes clusters and provide access to them within the OpenControlPlane ecosystem."
+ } +} diff --git a/docs/developers/clusterproviders.md b/docs/developers/_clusterprovider/clusterproviders.md similarity index 91% rename from docs/developers/clusterproviders.md rename to docs/developers/_clusterprovider/clusterproviders.md index a7104c9..7e60652 100644 --- a/docs/developers/clusterproviders.md +++ b/docs/developers/_clusterprovider/clusterproviders.md @@ -1,12 +1,12 @@ -# Cluster Providers +# Develop -A *ClusterProvider* is one of the three provider types in the openMCP architecture (the other two being *PlatformService* and *ServiceProvider*). ClusterProviders are responsible for managing kubernetes clusters and access to them, based on our [cluster API](https://github.com/openmcp-project/openmcp-operator/tree/main/api/clusters/v1alpha1). +A *ClusterProvider* is one of the three provider types in the OpenControlPlane architecture (the other two being *PlatformService* and *ServiceProvider*). ClusterProviders are responsible for managing kubernetes clusters and access to them, based on our [cluster API](https://github.com/openmcp-project/openmcp-operator/tree/main/api/clusters/v1alpha1). -This document aims to describe the tasks of a ClusterProvider and the contract that it needs to fulfill in order to work within the openMCP ecosystem. +This document aims to describe the tasks of a ClusterProvider and the contract that it needs to fulfill in order to work within the OpenControlPlane ecosystem. ## Deploying a ClusterProvider -ClusterProviders are usually deployed via the [provider deployment](./provider_deployment.md) mechanism and need to stick to the corresponding contract. +ClusterProviders are usually deployed via the provider deployment mechanism and need to stick to the corresponding contract. ## Implementing a ClusterProvider @@ -44,7 +44,7 @@ spec: version: 1.32.2 ``` -`spec.providerRef` is the name of the ClusterProvider that created this `ClusterProfile`. 
It should be filled with the value that the provider received via its [`--provider-name`](./provider_deployment.md#arguments) argument. +`spec.providerRef` is the name of the ClusterProvider that created this `ClusterProfile`. It should be filled with the value that the provider received via its `--provider-name` argument. `spec.providerConfigRef` is the name of the provider configuration that is responsible for this profile. Whether this refers to an actual k8s resource, an internal value or just a static string depends on the provider implementation. It is used as a label value though and therefore has to match the corresponding regex. @@ -99,7 +99,7 @@ The rest of the reconciliation logic is pretty much provider specific: If the `C #### Status Reporting -Since creating, updating, or deleting k8s clusters can easily take several minutes, reporting the current status is very important here. It is recommended to make good use of the conditions that are part of the status. ClusterProviders must adhere to the [general status reporting rules](./general.md#status-reporting). +Since creating, updating, or deleting k8s clusters can easily take several minutes, reporting the current status is very important here. It is recommended to make good use of the conditions that are part of the status. ClusterProviders must adhere to general status reporting rules. In addition to the common status, the `Cluster` status contains a few more fields that can be set by the ClusterProvider: - `apiServer` should be filled with the k8s cluster's apiserver endpoint, as soon as it is known. @@ -209,7 +209,7 @@ It modifies the `AccessRequest` in the following way: This means that the AccessRequest controller in a ClusterProvider must only act on AccessRequests that have both of the aforementioned labels set. They can then expect `spec.clusterRef` to be set and don't need to check for `spec.requestRef`. 
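Since `spec.providerConfigRef` is used as a label value, it has to satisfy the Kubernetes label-value syntax: at most 63 characters, and either empty or starting and ending with an alphanumeric character, with `-`, `_`, and `.` allowed in between. The following is a stdlib-only sketch of that check; the helper name is hypothetical, and real controllers would rather use the validation helpers from `k8s.io/apimachinery`:

```go
package main

import (
	"fmt"
	"regexp"
)

// labelValueRe mirrors the Kubernetes label-value syntax: empty, or an
// alphanumeric start and end with '-', '_', '.' allowed in between.
var labelValueRe = regexp.MustCompile(`^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$`)

// isValidLabelValue is a hypothetical helper; production code would use
// k8s.io/apimachinery/pkg/util/validation.IsValidLabelValue instead.
func isValidLabelValue(v string) bool {
	return len(v) <= 63 && labelValueRe.MatchString(v)
}

func main() {
	for _, v := range []string{"gardener-default", "-leading-dash", ""} {
		fmt.Printf("%q valid: %v\n", v, isValidLabelValue(v))
	}
}
```

Running the provider-config name through such a check before using it as a label avoids writes being rejected by the API server.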
-It is recommended to use [event filtering](./general.md#event-filtering) to avoid reconciling AccessRequests that belong to another provider or have not yet been prepared by the generic controller. The controller-utils library contains a `HasLabelPredicate` filter that can be used for both, verifying existence of a label as well as checking if it has a specific value: +It is recommended to use event filtering to avoid reconciling AccessRequests that belong to another provider or have not yet been prepared by the generic controller. The controller-utils library contains a `HasLabelPredicate` filter that can be used both for verifying the existence of a label and for checking whether it has a specific value: ```go import ( ctrl "sigs.k8s.io/controller-runtime" diff --git a/docs/developers/adrs/_category_.json b/docs/developers/adrs/_category_.json new file mode 100644 index 0000000..2d7381b --- /dev/null +++ b/docs/developers/adrs/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "ADRs", + "position": 99, + "link": { + "type": "generated-index", + "description": "Architectural Decision Records document important technical decisions made in the OpenControlPlane project." + } +} diff --git a/docs/developers/general.md b/docs/developers/general.mdx similarity index 97% rename from docs/developers/general.md rename to docs/developers/general.mdx index 8a018ce..916a6cb 100644 --- a/docs/developers/general.md +++ b/docs/developers/general.mdx @@ -1,6 +1,10 @@ +--- +sidebar_position: 2 +--- + # General Controller Guidelines -This document contains some general guidelines for contributing code to openMCP controllers. The goal is to align the coding and make all controllers look and behave similarly. +This document contains some general guidelines for contributing code to OpenControlPlane controllers. The goal is to align the coding and make all controllers look and behave similarly.
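The label-based event filtering recommended above can be illustrated with a dependency-free sketch of what a `HasLabelPredicate`-style check does. The label key and value here are placeholders, and actual controllers should use the real predicate from the controller-utils library with controller-runtime builders:

```go
package main

import "fmt"

// hasLabel reports whether labels contain key and, if wantValue is
// non-empty, whether the label is set to exactly that value. This mirrors
// the two modes of a HasLabelPredicate-style filter: existence vs. value.
func hasLabel(labels map[string]string, key, wantValue string) bool {
	got, ok := labels[key]
	if !ok {
		return false
	}
	return wantValue == "" || got == wantValue
}

func main() {
	// Placeholder label, as it might appear on a prepared AccessRequest.
	labels := map[string]string{"provider": "gardener"}
	fmt.Println(hasLabel(labels, "provider", ""))         // existence only
	fmt.Println(hasLabel(labels, "provider", "gardener")) // exact value
	fmt.Println(hasLabel(labels, "provider", "kind"))     // wrong value
}
```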
## Reconcile Logic diff --git a/docs/developers/serviceprovider/01-deployment.mdx b/docs/developers/serviceprovider/01-deployment.mdx new file mode 100644 index 0000000..0482462 --- /dev/null +++ b/docs/developers/serviceprovider/01-deployment.mdx @@ -0,0 +1,90 @@ +--- +sidebar_position: 1 +--- + +# Deploy + +`ServiceProviders` provide the actual services that can be consumed by customers via the ManagedControlPlanes within the OpenControlPlane ecosystem. + +They can automatically be deployed via the `ServiceProvider` resource. The [openmcp-operator](https://github.com/openmcp-project/openmcp-operator) is responsible for these resources. + +All providers are cluster-scoped resources. + +## Example ServiceProvider Resource + +```yaml +apiVersion: openmcp.cloud/v1alpha1 +kind: ServiceProvider +metadata: + name: example-service +spec: + image: ghcr.io/openmcp-project/images/service-provider-example:v1.0.0 + verbosity: INFO +``` + +## Common Provider Contract + +All provider types (ClusterProviders, ServiceProviders, PlatformServices) follow the same deployment contract. + +### Executing the Binary + +#### Image + +Each provider implementation must provide a container image with the provider binary set as an entrypoint. + +#### Subcommands + +The provider binary must take two subcommands: +- `init` initializes the provider. This usually means deploying CRDs for custom resources used by the controller(s). + - The `init` subcommand is executed as a job once whenever the deployed version of the provider changes. +- `run` runs the actual controller(s) required for the provider. + - The `run` subcommand is executed in a pod as part of a deployment. + - The pods with the `run` command are only started after the init job has successfully run through. + - It may be run multiple times in parallel (high-availability), so the provider implementation should support this, e.g. via leader election. 
+ +#### Arguments + +Both subcommands take the same arguments, which are explained below. These arguments will always be passed into the provider. +- `--environment` *any lowercase string* + - The *environment* argument is meant to distinguish between multiple environments (=platform clusters) watching the same onboarding cluster. For example, there could be a public environment and another fenced one - both watch the same resources on the same cluster, but only one of them is meant to react to each resource, depending on its configuration. + - Most setups will probably use only a single environment. + - Will likely be set to the landscape name (e.g. `canary`, `live`) most of the time. +- `--provider-name` *any lowercase string* + - This argument contains the name of the k8s provider resource from which this pod was created. + - If ever multiple instances of the same provider are deployed in the same landscape, this value can be used to differentiate between them. +- `--verbosity` or `-v` *enum: ERROR, INFO, or DEBUG* + - This value specifies the desired logging verbosity for the provider. + +#### Environment Variables + +The following environment variables can be expected to be set: +- `POD_NAME` + - Name of the pod the provider binary runs in. +- `POD_NAMESPACE` + - Namespace of the pod the provider binary runs in. +- `POD_IP` + - IP address of the pod the provider binary runs in. +- `POD_SERVICE_ACCOUNT_NAME` + - Name of the service account that is used to run the provider. + +#### Customizations + +While it is possible to customize some aspects of how the provider binary is executed, such as adding additional environment variables, overwriting the subcommands, adding additional arguments, etc., this should only be done in exceptional cases to keep the complexity of setting up an OpenControlPlane landscape as low as possible. + +### Configuration + +Passing configuration into the provider binary via a command-line argument is not desired.
If the provider requires configuration of some kind, it is expected to read it from one or more k8s resources, potentially even running a controller to reconcile these resources. The `init` subcommand can be used to register the CRDs for the configuration resources, although this leads to the disadvantage of the configuration resource only being known after the provider has already been started, which can cause problems with GitOps (or similar deployment methods that deploy all resources at the same time). + +### Tips and Tricks + +#### Getting Access to the Onboarding Cluster + +Providers generally live in the platform cluster, so they can simply access it by using the in-cluster configuration. Getting access to the onboarding cluster is a little bit more tricky: First, the `Cluster` resource of the onboarding cluster itself or any `ClusterRequest` pointing to it is required. The provider can simply create its own `ClusterRequest` with purpose `onboarding` - a little trick that is possible due to the shared nature of the onboarding cluster: all requests to it will result in a reference to the same `Cluster`. Then, the provider needs to create an `AccessRequest` with the desired permissions and wait until it is ready. This will result in a secret containing a kubeconfig for the onboarding cluster. + +This flow is already implemented in the library function [`CreateAndWaitForCluster`](https://github.com/openmcp-project/openmcp-operator/blob/v0.11.2/lib/clusteraccess/clusteraccess.go#L387). + +## Deployment Example + +The `ServiceProvider` resource above will result in similar `Job` and `Deployment` resources as ClusterProviders, with the main difference being the `kind: ServiceProvider` annotation and corresponding labels. + +The deployment structure follows the same pattern with an init job for CRD installation and a controller deployment for the actual service provider logic.
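The contract described above (an `init` and a `run` subcommand sharing the same flags, plus pod metadata injected via environment variables) can be sketched with Go's standard `flag` package. This is an illustrative skeleton under stated assumptions, not the openmcp-operator's actual wiring; a real provider would start its controller manager in the `run` branch:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// providerOptions collects the shared arguments from the provider contract.
type providerOptions struct {
	Environment  string
	ProviderName string
	Verbosity    string
}

// parseProviderArgs parses the flags that both subcommands accept.
func parseProviderArgs(sub string, args []string) (providerOptions, error) {
	var opts providerOptions
	fs := flag.NewFlagSet(sub, flag.ContinueOnError)
	fs.StringVar(&opts.Environment, "environment", "", "platform environment, e.g. canary or live")
	fs.StringVar(&opts.ProviderName, "provider-name", "", "name of the provider resource this pod was created from")
	fs.StringVar(&opts.Verbosity, "verbosity", "INFO", "ERROR, INFO, or DEBUG")
	fs.StringVar(&opts.Verbosity, "v", "INFO", "shorthand for --verbosity")
	err := fs.Parse(args)
	return opts, err
}

func main() {
	args := os.Args[1:]
	if len(args) == 0 {
		// No subcommand given: demonstrate with example arguments.
		args = []string{"run", "--environment", "canary", "--provider-name", "gardener"}
	}
	sub := args[0]
	opts, err := parseProviderArgs(sub, args[1:])
	if err != nil {
		os.Exit(2)
	}
	switch sub {
	case "init":
		// Deploy CRDs here; runs once as a Job per provider version.
	case "run":
		// Start the controllers; POD_* metadata is available from the env.
		_ = os.Getenv("POD_NAME")
	}
	fmt.Printf("%s env=%s provider=%s verbosity=%s\n", sub, opts.Environment, opts.ProviderName, opts.Verbosity)
}
```

Keeping the flag parsing in a separate function such as the hypothetical `parseProviderArgs` makes the shared contract easy to unit-test for both subcommands.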
diff --git a/docs/developers/service-providers.md b/docs/developers/serviceprovider/02-service-providers.mdx similarity index 97% rename from docs/developers/service-providers.md rename to docs/developers/serviceprovider/02-service-providers.mdx index 5709841..b88923a 100644 --- a/docs/developers/service-providers.md +++ b/docs/developers/serviceprovider/02-service-providers.mdx @@ -1,6 +1,6 @@ -# Service Providers +# Develop -This guide shows you how to create Service Provider for the OpenMCP ecosystem from scratch. Service Providers are the heart of the OpenMCP platform, as they provide the capabilities to offer Infrastructure as Data services to end users. +This guide shows you how to create a Service Provider for the OpenControlPlane ecosystem from scratch. Service Providers are the heart of the OpenControlPlane platform, as they provide the capabilities to offer Infrastructure as Data services to end users. In this guide, we will walk you through the steps of creating a Service Provider using the [service-provider-template](https://github.com/openmcp-project/service-provider-template), explain the context a service provider operates in, and demonstrate how to run end-to-end tests for it. @@ -15,7 +15,7 @@ A service provider consists of the following two major parts, similar to a regul - **A user-facing ServiceProviderAPI**: This allows end users to request a `DomainService` for a `ManagedControlPlane`, e.g. `FooService` or `Velero`. - **A controller that reconciles the ServiceProviderAPI**: This controller manages the lifecycle of the provided `DomainService` and its API (such as `Foo` or the CRDs of Velero). -For a visual overview of how these components fit into an openMCP installation, refer to the [service provider deployment model](https://openmcp-project.github.io/docs/about/design/service-provider#deployment-model).
+For a visual overview of how these components fit into an OpenControlPlane installation, refer to the [service provider deployment model](https://openmcp-project.github.io/docs/about/design/service-provider#deployment-model). ## Prerequisites @@ -26,7 +26,7 @@ Finally, ensure that you have Go installed. You can download it from [go.dev](ht ## Service Provider Template Usage -The template allows you to create a service provider without requiring deep knowledge of the underlying OpenMCP platform. +The template allows you to create a service provider without requiring deep knowledge of the underlying OpenControlPlane platform. Run the following command to generate a new provider. Replace `velero` with the kind of your service: diff --git a/docs/developers/serviceprovider/03-examples.mdx b/docs/developers/serviceprovider/03-examples.mdx new file mode 100644 index 0000000..d6f3125 --- /dev/null +++ b/docs/developers/serviceprovider/03-examples.mdx @@ -0,0 +1,45 @@ +--- +sidebar_position: 3 +--- + +# Examples + +Community Service Providers that extend OpenControlPlane. Use these as inspiration to build your own integrations or contribute improvements. + +## Official Service Providers + +
+ +
+
+ Crossplane Service Provider +

service-provider-crossplane

+
+
+ Integrates Crossplane into OpenControlPlane for managing cloud infrastructure resources. +
+
+ GitHub +
+
+ +
+
+ Landscaper Service Provider +

service-provider-landscaper

+
+
+ Brings Gardener Landscaper to OpenControlPlane for managing complex cloud-native landscapes. +
+
+ GitHub +
+
+ +
+ +## Community Service Providers + +TODO: start one + +These projects serve as reference implementations for building Service Providers. Check out the [Deployment Guide](./01-deployment.mdx) to create your own, or contribute improvements to existing providers. diff --git a/docs/developers/serviceprovider/_category_.json b/docs/developers/serviceprovider/_category_.json new file mode 100644 index 0000000..4abcba0 --- /dev/null +++ b/docs/developers/serviceprovider/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "Service Providers", + "position": 3, + "link": { + "type": "generated-index", + "description": "Service Providers deliver consumable services to customers via ManagedControlPlanes within the OpenControlPlane ecosystem." + } +} diff --git a/docs/operators/00-getting-started.md b/docs/operators/00-getting-started.md deleted file mode 100644 index bad5562..0000000 --- a/docs/operators/00-getting-started.md +++ /dev/null @@ -1 +0,0 @@ -# Getting Started diff --git a/docs/operators/00-overview.md b/docs/operators/00-overview.md new file mode 100644 index 0000000..ce28af6 --- /dev/null +++ b/docs/operators/00-overview.md @@ -0,0 +1,93 @@ +--- +sidebar_position: 0 +id: bootstrapping-overview +--- + +import Tabs from '@theme/Tabs'; +import CodeBlock from '@theme/CodeBlock'; +import TabItem from '@theme/TabItem'; + +# Overview and Installation + +To set up and and manage OpenControlPlane landscapes, a concept named bootstrapping is used. +Bootstrapping works for creating new landscapes as well as updating existing landscapes with new versions of OpenControlPlane. +The bootstrapping involves the creation of a GitOps process where the desired state of the landscape is stored in a Git repository and is being synced to the actual landscape using FluxCD. 
+The operator of a landscape can configure the bootstrapping to their liking by providing a bootstrapping configuration that controls the configuration of the openmcp-operator including all desired cluster-providers, service-providers, and platform services. +The bootstrapping is performed by the `openmcp-bootstrapper` command line tool (https://github.com/openmcp-project/bootstrapper). + +## General Bootstrapping Architecture + +```mermaid
flowchart TD
    subgraph OCI Registry
    A[openMCP Root OCM Component]
    B[openmcp-operator] --> A
    C[Cluster Provider] --> A
    D[Service Provider] --> A
    E[Platform Service] --> A
    F[GitOps Templates] --> A
    end

    subgraph GitRepo[Git Repository]
    G[Kustomization]
    end

    subgraph Target Kubernetes Cluster
    H[GitSource]
    I[Kustomization]
    I --> G
    end

    subgraph openmcp-bootstrapper
    J[Bootstrapper CLI]
    J --> A
    J --> G
    J --> H
    J --> I
    end

    H --> GitRepo
``` + +The `openMCP Root OCM Component` (github.com/openmcp-project/openmcp) contains references to the `openmcp-operator`, the `gitops-templates` (github.com/openmcp-project/gitops-templates) as well as a list of cluster providers, service providers and platform services that can be deployed. +The `openMCP Root OCM Component` acts as the source of the available versions, image locations and deployment configuration of an OpenControlPlane landscape. + +The `Git Repository` contains the desired state of the OpenControlPlane landscape. The desired state is encoded in a set of Kubernetes manifests that are organized and templated using Kustomize. The `Git Repository` is updated by the `openmcp-bootstrapper` CLI tool based on the information provided in the `openMCP Root OCM Component` as well as the bootstrapping configuration provided by the operator.
+ The `openmcp-bootstrapper` reads the `openMCP Root OCM Component` from an OCI registry to retrieve the `GitOps Templates` as well as the image locations of the FluxCD tool, the `openmcp-operator`, the cluster providers, the service providers and the platform services. The templated `GitOpsTemplate` is applied to the `Git Repository` and the templated FluxCD deployment is applied to the `Target Kubernetes Cluster`. The `openmcp-bootstrapper` also creates a FluxCD `GitSource` based on the provided Git repository URL and credentials. +The `openmcp-bootstrapper` then creates a FluxCD `Kustomization` that points to the `Git Repository` and applies it to the `Target Kubernetes Cluster`. + +## Prerequisites + +* A target Kubernetes cluster that matches the desired cluster provider being used (e.g. `Kind` for local testing, `Gardener` for Gardener Shoots) +* A Git repository that will be used to store the desired state of the OpenControlPlane landscape +* An OCI registry that contains the `openMCP Root OCM Component` (e.g. `ghcr.io/openmcp-project`) + +:::info +The Git repository used in the following examples must exist before running the `openmcp-bootstrapper` CLI tool. The `openmcp-bootstrapper` uses the default branch (like `main`) as a source to create the desired branch. +The default branch must not be empty, but it should not contain any files or folders that would conflict with the files and folders created by the `openmcp-bootstrapper`. A recommendation is to create an empty repository with a `README.md` file. +::: + +## Download the `openmcp-bootstrapper` CLI tool + +The `openmcp-bootstrapper` CLI tool can be downloaded as an OCI image from an OCI registry (e.g. `ghcr.io/openmcp-project`). +In this example, Docker is used to run the `openmcp-bootstrapper` CLI tool. If you don't use Docker, adjust the command accordingly.
+ +Retrieve the latest version of the `openmcp-bootstrapper`: + +```shell +TAG=$(curl -s "https://api.github.com/repos/openmcp-project/bootstrapper/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) +export OPENMCP_BOOTSTRAPPER_VERSION="${TAG}" +``` + +Pull the latest version of the `openmcp-bootstrapper`: + +```shell +docker pull ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} +``` + +## Next Steps + +Choose your cluster provider to continue: +- [Kind Provider](./01-kind-provider.md) - For local testing and development +- [Gardener Provider](./02-gardener-provider.md) - For production Gardener-based landscapes diff --git a/docs/operators/01-boostrapping.md b/docs/operators/01-boostrapping.md deleted file mode 100644 index 00be3cd..0000000 --- a/docs/operators/01-boostrapping.md +++ /dev/null @@ -1,2122 +0,0 @@ -import Tabs from '@theme/Tabs'; -import CodeBlock from '@theme/CodeBlock'; -import TabItem from '@theme/TabItem'; - -# openMCP Landscape Bootstrapping - -To set up and and manage openMCP landscapes, a concept named bootstrapping is used. -Bootstrapping works for creating new landscapes as well as updating existing landscapes with new versions of openMCP. -The bootstrapping involves the creation of a GitOps process where the desired state of the landscape is stored in a Git repository and is being synced to the actual landscape using FluxCD. -The operator of a landscape can configure the bootstrapping to their liking by providing a bootstrapping configuration that controls the configuration of the openmcp-operator including all desired cluster-providers, service-providers, and platform services. -The bootstrapping is performed by the `openmcp-bootstrapper` command line tool (https://github.com/openmcp-project/bootstrapper). 
- -## General Bootstrapping Architecture - -```mermaid -flowchart TD - subgraph OCI Registry - A[openMCP Root OCM Component] - B[openmcp-operator] --> A - C[Cluster Provider] --> A - D[Service Provider] --> A - E[Platform Service] --> A - F[GitOps Templates] --> A - end - - subgraph GitRepo[Git Repository] - G[Kustomization] - end - - subgraph Target Kubernetes Cluster - H[GitSource] - I[Kustomization] - I --> G - end - - subgraph openmcp-bootstrapper - J[Bootstrapper CLI] - J --> A - J --> G - J --> H - J --> I - end - - H --> GitRepo -``` - -The `openMCP Root OCM Component` (github.com/openmcp-project/openmcp) contains references to the `openmcp-operator`, the `gitops-templates` (github.com/openmcp-project/gitops-templates) as well as a list of cluster providers, service providers and platform services that can be deployed. -The `openMCP Root OCM Component` acts as the source of the available versions, image locations and deployment configuration of an openMCP landscape. - -The `Git Repository` contains the desired state of the openMCP landscape. The desired state is encoded in a set of Kubernetes manifests that are organized and templated using Kustomize. The `Git Repository` is being updated by the `openmcp-bootstrapper` CLI tool for the information provided in the `openMCP Root OCM Component` as well as the bootstrapping configuration provided by the operator. - -The `openmcp-bootstrapper` reads the `openMCP Root OCM Component` from an OCI registry to retrieve the `GitOps Templates` as well as the image locations of the FluxCD tool, the `openmcp-operator`, the cluster providers, the service providers and the platform services. The templated `GitOpsTemplate` is applied to the `Git Repository` and the templated FluxCD deployment is applied to the `Target Kubernetes Cluster`. The `openmcp-bootstrapper` also creates a FluxCD `GitSource` based on the provided Git repository URL and credentials. 
-The `openmcp-bootstrapper` then creates a FluxCD `Kustomizations` that points to the `Git Repository` and applies it to the `Target Kubernetes Cluster`. - -### Prerequisites - -* A target Kubernetes cluster that matches the desired cluster provider being used (e.g. `Kind` for local testing, `Gardener` for Gardener Shoots) -* A Git repository that will be used to store the desired state of the openMCP landscape -* An OCI registry that contains the `openMCP Root OCM Component` (e.g. `ghcr.io/openmcp-project`) - -:::info -The Git repository used in the following examples must exist before running the `openmcp-bootstrapper` CLI tool. The `openmcp-bootstrapper` is using the default branch (like `main`) as a source to create the desired branch. -The default branch may not be empty, but it should not contain any files or folders that would conflict with the files and folders created by the `openmcp-bootstrapper`. A recommendation is to create an empty repository with a `README.md` file. -::: - -#### Download the `openmcp-bootstrapper` CLI tool - -The `openmcp-bootstrapper` CLI tool can be downloaded as an OCI image from an OCI registry (e.g. `ghcr.io/openmcp-project`). -In this example docker will be used to run the `openmcp-bootstrapper` CLI tool. If you don't use docker, adjust the command accordingly. - -Retrieve the latest version of the `openmcp-bootstrapper`: - -```shell -TAG=$(curl -s "https://api.github.com/repos/openmcp-project/bootstrapper/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) -export OPENMCP_BOOTSTRAPPER_VERSION="${TAG}" -``` - -Pull the latest version of the `openmcp-bootstrapper`: - -```shell -docker pull ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} -``` - -## Example using the Kind Cluster Provider - -### Requirements - -* [Docker](https://docs.docker.com/get-docker/) installed and running. Docker alternatively can be replaced with another OCI runtime (e.g. 
Podman) that can run the `openmcp-bootstrapper` CLI tool as an OCI image. -* [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/) installed - -:::info -If you are using a docker alternative, make sure that it is correctly setup regarding Docker compatibility. In case of Podman, you should find a corresponding configuration under `Settings` in the Podman UI. -::: - -### Create a configuration folder - -Create a directory that will be used to store the configuration files and the kubeconfig files. -To keep this example simple, we will use a single directory named `config` in the current working directory. - -```shell -mkdir config -``` - -All following examples will use the `config` directory as the configuration directory. If you use a different directory, replace all occurrences of `config` with your desired directory path. - -Create a directory named `kubeconfigs` in the configuration folder to store the kubeconfig files of the created clusters. - -```shell -mkdir kubeconfigs -``` - -### Create the Kind configuration file (kind-config.yaml) in the configuration folder - -```yaml -apiVersion: kind.x-k8s.io/v1alpha4 -kind: Cluster -nodes: -- role: control-plane - extraMounts: - - hostPath: /var/run/docker.sock - containerPath: /var/run/host-docker.sock -``` - -### Create the Kind cluster - -Create the Kind cluster using the configuration file created in the previous step. - -:::warning - -Please check if your current `kind` network has a `/16` subnet. This is required for our cluster-provider-kind. -You can check the current network configuration using: - -```shell -docker network inspect kind | jq ".[].IPAM.Config.[].Subnet" -"172.19.0.0/16" -``` - -If the result is not specifying `/16` but something smaller like `/24` you need to delete the network and create a new one. For that **all kind clusters needs to be deleted**. 
Then run: - -```shell -docker network rm kind - -docker network create kind --subnet 172.19.0.0/16 -``` - -::: - -:::info Podman Support -In case you are using Podman instead of Docker, it is currently required to first create a suitable network for the Kind cluster by executing the following command before creating the Kind cluster itself. - -```shell -podman network create kind --subnet 172.19.0.0/16 -``` - -::: - -```shell -kind create cluster --name platform --config ./config/kind-config.yaml -``` - -Export the internal kubeconfig of the Kind cluster to a file named `platform-int.kubeconfig` in the configuration folder. - -```shell -kind get kubeconfig --internal --name platform > ./kubeconfigs/platform-int.kubeconfig -``` - -### Create a bootstrapping configuration file (bootstrapper-config.yaml) in the configuration folder - -Replace `` and `` with your Git organization and repository name. -The environment can be set to the logical environment name (e.g. `dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. -The branch can be set to the desired branch name in the Git repository that will be used to store the desired state of the openMCP landscape. 
- -Get the latest version of the `github.com/openmcp-project/openmcp` root component: - -```shell -TAG=$(curl -s "https://api.github.com/repos/openmcp-project/openmcp/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) -echo "${TAG}" -``` - -In the bootstrapper configuration, replace `` with the latest version of the `github.com/openmcp/openmcp` root component: - -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -openmcpOperator: - config: {} -``` - -### Create a Git configuration file (git-config.yaml) in the configuration folder - -For GitHub use a personal access token with `repo` write permissions. -It is also possible to use a fine-grained token. In this case, it requires read and write permissions for `Contents`. - -```yaml title="config/git-config.yaml" -auth: - basic: - username: "" - password: "" -``` - -### Run the `openmcp-bootstrapper` CLI tool and deploy FluxCD to the Kind cluster - -```shell -docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} deploy-flux --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig /config/bootstrapper-config.yaml -``` - -You should see output similar to the following: - -```shell -Info: Starting deployment of Flux controllers with config file: /config/bootstrapper-config.yaml. 
-Info: Ensure namespace flux-system exists -Info: Creating/updating git credentials secret flux-system/git -Info: Created/updated git credentials secret flux-system/git -Info: Creating working directory for gitops-templates -Info: Downloading templates -/tmp/openmcp.cloud.bootstrapper-3041773446/download: 9 file(s) with 691073 byte(s) written -Info: Arranging template files -Info: Arranged template files -Info: Applying templates from gitops-templates/fluxcd to deployment repository -Info: Kustomizing files in directory: /tmp/openmcp.cloud.bootstrapper-3041773446/repo/envs/dev/fluxcd -Info: Applying flux deployment objects -Info: Deployment of flux controllers completed -``` - -### Inspect the deployed FluxCD controllers and Kustomization - -Load the kubeconfig of the Kind cluster and check the deployed FluxCD controllers and the created GitRepository and Kustomization. - -```shell -kind get kubeconfig --name platform > ./kubeconfigs/platform.kubeconfig -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n flux-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -helm-controller-648cdbf8d8-8jhnf 1/1 Running 0 9m37s -image-automation-controller-56df4c78dc-qwmfm 1/1 Running 0 9m35s -image-reflector-controller-56f69fcdc9-pgcgx 1/1 Running 0 9m35s -kustomize-controller-b4c4dcdc8-g49gc 1/1 Running 0 9m38s -notification-controller-59d754d599-w7fjp 1/1 Running 0 9m36s -source-controller-6b45b6464f-jbgb6 1/1 Running 0 9m38s -``` - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME URL AGE READY STATUS -flux-system environments https://github.com// 86s False failed to checkout and determine revision: unable to clone 'https://github.com//': couldn't find remote ref "refs/heads/" -``` - -This error is expected as the branch does not exist yet in the Git
repository. The `openmcp-bootstrapper` will create the branch in the next step. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME AGE READY STATUS -flux-system flux-system 3m15s False Source artifact not found, retrying in 30s -``` - -This error is also expected as the GitRepository does not exist yet. The `openmcp-bootstrapper` will create the GitRepository in the next step. - -### Run the `openmcp-bootstrapper` CLI tool to deploy openMCP to the Kind cluster - -Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the kind cluster provider and the openmcp-operator configuration. - -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -providers: - clusterProviders: - - name: kind - config: - extraVolumeMounts: - - mountPath: /var/run/docker.sock - name: docker - extraVolumes: - - name: docker - hostPath: - path: /var/run/host-docker.sock - type: Socket - -openmcpOperator: - config: - managedControlPlane: - mcpClusterPurpose: mcp-worker - reconcileMCPEveryXDays: 7 - scheduler: - scope: Cluster - purposeMappings: - mcp: - template: - spec: - profile: kind - tenancy: Exclusive - mcp-worker: - template: - spec: - profile: kind - tenancy: Exclusive - platform: - template: - metadata: - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: kind - tenancy: Shared - onboarding: - template: - metadata: - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: kind - tenancy: Shared - workload: - tenancyCount: 20 - template: - spec: - profile: kind - tenancy: Shared -``` - -```shell -docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs 
ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig /config/bootstrapper-config.yaml -``` - -You should see output similar to the following: - -```shell -Info: Downloading component ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp:v0.0.20 -Info: Creating template transformer -Info: Downloading template resources -/tmp/openmcp.cloud.bootstrapper-2402093624/transformer/download/fluxcd: 9 file(s) with 691073 byte(s) written -/tmp/openmcp.cloud.bootstrapper-2402093624/transformer/download/openmcp: 8 file(s) with 6625 byte(s) written -Info: Transforming templates into deployment repository structure -Info: Fetching openmcp-operator component version -Info: Cloning deployment repository https://github.com/reshnm/template-test -Info: Checking out or creating branch kind -Info: Applying templates from "gitops-templates/fluxcd"/"gitops-templates/openmcp" to deployment repository -Info: Templating providers: clusterProviders=[{kind [123 34 101 120 116 114 97 86 111 108 117 109 101 77 111 117 110 116 115 34 58 91 123 34 109 111 117 110 116 80 97 116 104 34 58 34 47 118 97 114 47 114 117 110 47 100 111 99 107 101 114 46 115 111 99 107 34 44 34 110 97 109 101 34 58 34 100 111 99 107 101 114 34 125 93 44 34 101 120 116 114 97 86 111 108 117 109 101 115 34 58 91 123 34 104 111 115 116 80 97 116 104 34 58 123 34 112 97 116 104 34 58 34 47 118 97 114 47 114 117 110 47 104 111 115 116 45 100 111 99 107 101 114 46 115 111 99 107 34 44 34 116 121 112 101 34 58 34 83 111 99 107 101 116 34 125 44 34 110 97 109 101 34 58 34 100 111 99 107 101 114 34 125 93 44 34 118 101 114 98 111 115 105 116 121 34 58 34 100 101 98 117 103 34 125] map[extraVolumeMounts:[map[mountPath:/var/run/docker.sock name:docker]] extraVolumes:[map[hostPath:map[path:/var/run/host-docker.sock type:Socket] name:docker]] verbosity:debug]}], 
serviceProviders=[], platformServices=[], imagePullSecrets=[] -Info: Applying Custom Resource Definitions to deployment repository -/tmp/openmcp.cloud.bootstrapper-2402093624/repo/resources/openmcp/crds: 8 file(s) with 475468 byte(s) written -/tmp/openmcp.cloud.bootstrapper-2402093624/repo/resources/openmcp/crds: 1 file(s) with 1843 byte(s) written -Info: No extra manifest directory specified, skipping -Info: Committing and pushing changes to deployment repository -Info: Created commit: 287f9e88b905371bba412b5d0286ad02db0f4aac -Info: Running kustomize on /tmp/openmcp.cloud.bootstrapper-2402093624/repo/envs/dev -Info: Applying Kustomization manifest: default/bootstrap - -``` - -### Inspect the Git repository - -The desired state of the openMCP landscape has now been created in the Git repository and should look similar to the following structure: - -```shell -. -├── envs -│   └── dev -│   ├── fluxcd -│   │   ├── flux-kustomization.yaml -│   │   ├── gitrepo.yaml -│   │   └── kustomization.yaml -│   ├── kustomization.yaml -│   ├── openmcp -│   │   ├── config -│   │   │   └── openmcp-operator-config.yaml -│   │   └── kustomization.yaml -│   └── root-kustomization.yaml -└── resources - ├── fluxcd - │   ├── components.yaml - │   ├── flux-kustomization.yaml - │   ├── gitrepo.yaml - │   └── kustomization.yaml - ├── kustomization.yaml - ├── openmcp - │   ├── cluster-providers - │   │   └── kind.yaml - │   ├── crds - │   │   ├── clusters.openmcp.cloud_accessrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml - │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusters.yaml - │   │   ├── kind.clusters.openmcp.cloud_providerconfigs.yaml - │   │   ├── openmcp.cloud_clusterproviders.yaml - │   │   ├── openmcp.cloud_platformservices.yaml - │   │   └── openmcp.cloud_serviceproviders.yaml - │   ├── deployment.yaml - │   ├── kustomization.yaml - │   ├── namespace.yaml - │   └── rbac.yaml - └── root-kustomization.yaml 
-``` - -The `envs/` folder contains the Kustomization files that are used by FluxCD to deploy openMCP to the Kind cluster. -The `resources/` folder contains the base resources that are used by the Kustomization files in the `envs/` folder. - -### Inspect the Kustomizations in the Kind cluster - -Force an update of the GitRepository and Kustomization in the Kind cluster to pick up the changes made in the Git repository. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system patch kustomization flux-system --type merge -p '{"spec":{"force":true}}' -``` - -Get the status of the GitRepository in the Kind cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME URL AGE READY STATUS -flux-system environments https://github.com// 9m6s True stored artifact for revision 'docs@sha1:...' -``` - -FluxCD is now successfully configured to watch for changes in the specified GitHub repository, using the `environments` custom resource of kind `GitRepository`. -Next, get the status of the Kustomizations in the Kind cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME AGE READY STATUS -default bootstrap 5m31s True Applied revision: docs@sha1:... -flux-system flux-system 10m True Applied revision: docs@sha1:... -``` - -You can see that there are now two Kustomizations in the Kind cluster. -The `flux-system` Kustomization is used to deploy the FluxCD controllers, and the `bootstrap` Kustomization is used to deploy openMCP to the Kind cluster.
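For orientation, the `environments` GitRepository that the bootstrapper templates (the `gitrepo.yaml` files in the repository tree above) has roughly the following shape. This is an illustrative sketch, not the exact generated content; the URL and branch placeholders correspond to the values from your bootstrapper configuration, and `git` is the credentials secret created during `deploy-flux`:

```yaml
# Illustrative sketch of the Flux GitRepository watching the deployment repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: environments
  namespace: flux-system
spec:
  interval: 1m              # how often Flux polls the repository
  url: https://github.com/<git-org>/<git-repo>
  ref:
    branch: <push-branch>   # the pushBranch from bootstrapper-config.yaml
  secretRef:
    name: git               # basic-auth credentials from git-config.yaml
```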
- -### Inspect the deployed openMCP components in the Kind cluster - -Now check the deployed openMCP components. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -cp-kind-6b4886b7cf-z54pg 1/1 Running 0 20s -cp-kind-init-msqg7 0/1 Completed 0 27s -openmcp-operator-5f784f47d7-nfg65 1/1 Running 0 34s -ps-managedcontrolplane-668c99c97c-9jltx 1/1 Running 0 4s -ps-managedcontrolplane-init-49rx2 0/1 Completed 0 27s -``` - -The openmcp-operator, the managedcontrolplane platform service, and the kind cluster provider are now running. -You are now ready to create and manage clusters using openMCP. - -### Get Access to the Onboarding Cluster - -The openmcp-operator should now have created a `Cluster` resource named `onboarding` on the platform cluster that represents the onboarding cluster. -The onboarding cluster is a special cluster that is used to create new managed control planes. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusters.clusters.openmcp.cloud -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME PURPOSES PHASE VERSION PROVIDER AGE -openmcp-system onboarding ["onboarding"] Ready 11m -``` - -Now you can retrieve the kubeconfig of the onboarding cluster. -Use `kind` to retrieve the list of available clusters. - -```shell -kind get clusters -``` - -You should see output similar to the following: - -```shell -onboarding.12345678 -platform -``` - -You can now see the new onboarding cluster. -Get the kubeconfig of the onboarding cluster and save it to a file named `onboarding.kubeconfig` in the `kubeconfigs` folder. -Please replace `onboarding.12345678` with the actual name of your onboarding cluster.
- -```shell -kind get kubeconfig --name onboarding.12345678 > ./kubeconfigs/onboarding.kubeconfig -``` - -### Create a Managed Control Plane - -Create a file named `my-mcp.yaml` with the following content in the configuration folder: - -```yaml title="config/my-mcp.yaml" -apiVersion: core.openmcp.cloud/v2alpha1 -kind: ManagedControlPlaneV2 -metadata: - name: my-mcp - namespace: default -spec: - iam: {} -``` - -Apply the file to the onboarding cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/my-mcp.yaml -``` - -The openmcp-operator now starts creating the resources required for the managed control plane. As a result, a new Managed Control Plane should be available soon. -You can check the status of the Managed Control Plane using the following command: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp -o yaml -``` - -You should see output similar to the following: - -```yaml -apiVersion: core.openmcp.cloud/v2alpha1 -kind: ManagedControlPlaneV2 -metadata: - finalizers: - - core.openmcp.cloud/mcp - - request.clusters.openmcp.cloud/my-mcp - name: my-mcp - namespace: default -spec: - iam: {} -status: - conditions: - - lastTransitionTime: "2025-09-16T13:03:55Z" - message: All accesses are ready - observedGeneration: 1 - reason: AllAccessReady_True - status: "True" - type: AllAccessReady - - lastTransitionTime: "2025-09-16T13:03:55Z" - message: Cluster conditions have been synced to MCP - observedGeneration: 1 - reason: ClusterConditionsSynced_True - status: "True" - type: ClusterConditionsSynced - - lastTransitionTime: "2025-09-16T13:03:55Z" - message: ClusterRequest is ready - observedGeneration: 1 - reason: ClusterRequestReady_True - status: "True" - type: ClusterRequestReady - - lastTransitionTime: "2025-09-16T13:03:50Z" - message: "" - observedGeneration: 1 - reason: Meta_True - status: "True" - type: Meta - observedGeneration: 1 - 
phase: Ready -``` - -You should see that the Managed Control Plane is in phase `Ready`. -The openmcp-operator should now have created a new Kind cluster that represents the Managed Control Plane. -You can check the list of available Kind clusters using the following command: - -```shell -kind get clusters -``` - -You should see output similar to the following: - -```shell -mcp-worker-abcde.87654321 -onboarding.12345678 -platform -``` - -You can now get the kubeconfig of the managed control plane and save it to a file named `my-mcp.kubeconfig` in the kubeconfigs folder. Please replace `mcp-worker-abcde.87654321` with the actual name of your managed control plane cluster. - -```shell -kind get kubeconfig --name mcp-worker-abcde.87654321 > ./kubeconfigs/my-mcp.kubeconfig -``` - -You can now use the kubeconfig to access the Managed Control Plane cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/my-mcp.kubeconfig get namespaces -``` - -### Deploy the Crossplane Service Provider - -Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the crossplane service provider. 
- -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -providers: - clusterProviders: - - name: kind - config: - extraVolumeMounts: - - mountPath: /var/run/docker.sock - name: docker - extraVolumes: - - name: docker - hostPath: - path: /var/run/host-docker.sock - type: Socket - serviceProviders: - - name: crossplane - -openmcpOperator: - config: - managedControlPlane: - mcpClusterPurpose: mcp-worker - reconcileMCPEveryXDays: 7 - scheduler: - scope: Cluster - purposeMappings: - mcp: - template: - spec: - profile: kind - tenancy: Exclusive - mcp-worker: - template: - spec: - profile: kind - tenancy: Exclusive - platform: - template: - metadata: - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: kind - tenancy: Shared - onboarding: - template: - metadata: - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: kind - tenancy: Shared - workload: - tenancyCount: 20 - template: - spec: - profile: kind - tenancy: Shared -``` - -Create a new folder named `extra-manifests` in the configuration folder. Then create a file named `crossplane-provider.yaml` with the following content, and save it in the new `extra-manifests` folder. - -:::info -Note that service provider crossplane only supports the installation of crossplane from an OCI registry. Replace the chart locations in the `ProviderConfig` with the OCI registry where you mirror your crossplane chart versions. OpenMCP will provide this as part of an open source [Releasechannel](https://github.com/openmcp-project/backlog/issues/323) in an upcoming update. 
-::: - -```yaml title="config/extra-manifests/crossplane-provider.yaml" -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: default -spec: - versions: - - version: v2.0.2 - chart: - url: ghcr.io/openmcp-project/charts/crossplane:2.0.2 - image: - url: xpkg.crossplane.io/crossplane/crossplane:v2.0.2 - - version: v1.20.1 - chart: - url: ghcr.io/openmcp-project/charts/crossplane:1.20.1 - image: - url: xpkg.crossplane.io/crossplane/crossplane:v1.20.1 - providers: - availableProviders: - - name: provider-kubernetes - package: xpkg.upbound.io/upbound/provider-kubernetes - versions: - - v0.16.0 -``` - -Run the `openmcp-bootstrapper` CLI tool to update the Git repository and deploy the crossplane service provider to the Kind cluster. - -```shell -docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig --extra-manifest-dir /config/extra-manifests /config/bootstrapper-config.yaml -``` - -Note the `--extra-manifest-dir` parameter, which points to the folder containing the extra manifest file created in the previous step. All manifest files in this folder will be added to the Kustomization used by FluxCD to deploy openMCP to the Kind cluster. - -The Git repository should now be updated: - -```shell -. 
-├── envs -│   └── dev -│   ├── fluxcd -│   │   ├── flux-kustomization.yaml -│   │   ├── gitrepo.yaml -│   │   └── kustomization.yaml -│   ├── kustomization.yaml -│   ├── openmcp -│   │   ├── config -│   │   │   └── openmcp-operator-config.yaml -│   │   └── kustomization.yaml -│   └── root-kustomization.yaml -└── resources - ├── fluxcd - │   ├── components.yaml - │   ├── flux-kustomization.yaml - │   ├── gitrepo.yaml - │   └── kustomization.yaml - ├── kustomization.yaml - ├── openmcp - │   ├── cluster-providers - │   │   └── kind.yaml - │   ├── crds - │   │   ├── clusters.openmcp.cloud_accessrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml - │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusters.yaml - │   │   ├── crossplane.services.openmcp.cloud_providerconfigs.yaml - │   │   ├── kind.clusters.openmcp.cloud_providerconfigs.yaml - │   │   ├── openmcp.cloud_clusterproviders.yaml - │   │   ├── openmcp.cloud_platformservices.yaml - │   │   └── openmcp.cloud_serviceproviders.yaml - │   ├── deployment.yaml - │   ├── extra - │   │   └── crossplane-providers.yaml - │   ├── kustomization.yaml - │   ├── namespace.yaml - │   ├── rbac.yaml - │   └── service-providers - │   └── crossplane.yaml - └── root-kustomization.yaml -``` - -After a while, the Kustomization in the Kind cluster should be updated and the crossplane service provider should be deployed. -To speed this up, you can force an update of the Kustomization in the Kind cluster to pick up the changes made in the Git repository.
- -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n default patch kustomization bootstrap --type merge -p '{"spec":{"force":true}}' -``` - -List the pods in the `openmcp-system` namespace again: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -cp-kind-6b4886b7cf-z54pg 1/1 Running 0 18m -cp-kind-init-msqg7 0/1 Completed 0 18m -openmcp-operator-5f784f47d7-nfg65 1/1 Running 0 18m -ps-managedcontrolplane-668c99c97c-9jltx 1/1 Running 0 18m -ps-managedcontrolplane-init-49rx2 0/1 Completed 0 18m -sp-crossplane-6b8cccc775-9hx98 1/1 Running 0 105s -sp-crossplane-init-6hvf4 0/1 Completed 0 2m11s -``` - -You should see that the crossplane service provider is running. This means that, from now on, openMCP is able to provide Crossplane service instances using the new crossplane service provider. - -### Create a Crossplane service instance on the onboarding cluster - -Create a file named `crossplane-instance.yaml` with the following content in the configuration folder: - -```yaml title="config/crossplane-instance.yaml" -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: Crossplane -metadata: - name: my-mcp - namespace: default -spec: - version: v1.20.1 - providers: - - name: provider-kubernetes - version: v0.16.0 -``` - -Apply the file to the onboarding cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/crossplane-instance.yaml -``` - -The Crossplane service provider should now start to create the necessary resources for the new Crossplane instance. As a result, a new Crossplane service instance should soon be available.
-You can check the status of the Crossplane instance using the following command: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get crossplane -n default my-mcp -o yaml -``` - -After a while, you should see output similar to the following: - -```yaml -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: Crossplane -metadata: - finalizers: - - openmcp.cloud/finalizers - generation: 1 - name: my-mcp - namespace: default -spec: - providers: - - name: provider-kubernetes - version: v0.16.0 - version: v1.20.1 -status: - conditions: - - lastTransitionTime: "2025-09-16T14:09:56Z" - message: Crossplane is healthy. - reason: Healthy - status: "True" - type: CrossplaneReady - - lastTransitionTime: "2025-09-16T14:10:01Z" - message: ProviderKubernetes is healthy. - reason: Healthy - status: "True" - type: ProviderKubernetesReady - observedGeneration: 0 - phase: "" -``` - -Crossplane and the provider Kubernetes should now be available on the MCP cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/my-mcp.kubeconfig api-resources | grep 'crossplane\|kubernetes' -``` - -## Example using the Gardener Cluster Provider - -### Requirements - -* A running Gardener installation (see the [Gardener documentation](https://gardener.cloud/docs/) for more information on Gardener) -* A Gardener project in which the clusters will be created -* An infrastructure secret in the Gardener project (see the [Gardener documentation](https://gardener.cloud/docs/getting-started/project/#infrastructure-secrets) for more information on how to create an infrastructure secret) -* Kubectl (see the [Kubectl installation guide](https://kubernetes.io/docs/tasks/tools/#kubectl) for more information on how to install kubectl) -* If the Gardener installation is using OIDC for authentication, install the [OIDC kubectl plugin](https://github.com/int128/kubelogin) -* A good understanding of Gardener, including how to create Gardener Shoot clusters and service accounts in Gardener projects.
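Before starting, it is worth verifying the client-side tooling; the kubeconfig helper script used below additionally relies on `jq` and `base64`. A small pre-flight check along these lines (a hypothetical convenience, not part of the openMCP tooling) can save a failed run later:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: verify that the CLI tools used in this
# guide are on the PATH before starting. Not part of the openMCP tooling.
check_tools() {
  local missing=()
  for tool in "$@"; do
    # command -v returns non-zero if the tool is not installed
    command -v "$tool" >/dev/null 2>&1 || missing+=("$tool")
  done
  if ((${#missing[@]} > 0)); then
    echo "Missing tools: ${missing[*]}" >&2
    return 1
  fi
  echo "All required tools found."
}

# Example: check_tools kubectl docker jq base64
```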
- -### Create a configuration folder - -Create a directory that will be used to store the configuration files and the kubeconfig files. -To keep this example simple, we will use a single directory named `config` in the current working directory. - -```shell -mkdir config -``` - -All following examples will use the `config` directory as the configuration directory. If you use a different directory, replace all occurrences of `config` with your desired directory path. - -Create a directory named `kubeconfigs` next to the `config` directory to store the kubeconfig files of the created clusters. - -```shell -mkdir kubeconfigs -``` - -### Create a Gardener Shoot for the Platform Cluster - -openMCP requires a running Kubernetes cluster that acts as the platform cluster. -The platform cluster hosts the openmcp-operator and all service providers, cluster providers, and platform services. -In this example, we will create a Gardener Shoot cluster that acts as the platform cluster. See the [Gardener documentation](https://gardener.cloud/docs/getting-started/shoots/) for more information on how to create a Gardener Shoot cluster.
- -Create a script folder named `scripts`: - -```shell -mkdir scripts -``` - -Create a file named `get-shoot-kubeconfig.sh` in the `scripts` folder with the following content: - -```shell title="scripts/get-shoot-kubeconfig.sh" -#!/usr/bin/env bash - -# Usage: get-shoot-kubeconfig.sh <gardener-kubeconfig> <project-name> <shoot-name> - -GARDENER_SECRET=$1 -NAMESPACE="garden-$2" -SHOOT_NAME=$3 - -REQUEST_PATH="$(mktemp -d)" -REQUEST="${REQUEST_PATH}/admin-kubeconfig-request.json" - -echo "{ \"apiVersion\": \"authentication.gardener.cloud/v1alpha1\", \"kind\": \"AdminKubeconfigRequest\", \"spec\": { \"expirationSeconds\": 7776000 } }" > ${REQUEST} 2>/dev/null - -KUBECONFIG=$(kubectl --kubeconfig "${GARDENER_SECRET}" create \ - -f ${REQUEST} \ - --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/adminkubeconfig 2>/dev/null | jq -r ".status.kubeconfig" | base64 -d) - - -echo "${KUBECONFIG}" -``` - -Make the script executable: - -```shell -chmod +x ./scripts/get-shoot-kubeconfig.sh -``` - -In order to execute this script, you need a kubeconfig file that has access to the Gardener installation. This can be acquired from the Gardener dashboard: select your user (icon in the upper right corner), click 'My Account', and download the kubeconfig file under `Access`. - -Alternatively, you can create a service account with the `Admin` role in the Gardener project and then retrieve the kubeconfig for the service account. See the [Gardener documentation](https://gardener.cloud/docs/getting-started/project/#service-accounts) for more information on how to create a service account. - -Now, create a new Gardener Shoot cluster in your Gardener project using the Gardener dashboard or the Gardener API via kubectl. Name the Shoot cluster `platform`. -Please consult the [Gardener documentation](https://gardener.cloud/docs/getting-started/shoots/) for more information on how to create a Gardener Shoot cluster.
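If you prefer the kubectl route over the dashboard, the Shoot manifest is applied to your project namespace (`garden-<project-name>`). The following is only an illustrative sketch with hypothetical values; the cloud profile, region, machine type, and worker settings must match your Gardener landscape, analogous to the `shootTemplate` fields shown later in this guide:

```yaml
# Illustrative sketch of a minimal Shoot; all values are placeholders that
# must be adapted to your landscape's CloudProfile.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: platform
  namespace: garden-<project-name>
spec:
  cloudProfile:
    kind: CloudProfile
    name: gcp                    # e.g. gcp, aws
  region: europe-west1           # a region from the CloudProfile
  secretBindingName: <infrastructure-secret>
  kubernetes:
    version: "1.32"
  networking:
    type: calico
    nodes: 10.180.0.0/16
  provider:
    type: gcp
    workers:
      - name: default-worker
        minimum: 1
        maximum: 2
        machine:
          type: n1-standard-2
```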
- -Download the admin kubeconfig of the `platform` Shoot cluster using the script created above (`get-shoot-kubeconfig.sh`) and save it to a file named `platform.kubeconfig` in the `kubeconfigs` folder. The script expects the path to your Gardener kubeconfig, the Gardener project name, and the Shoot name as arguments. - -```shell -./scripts/get-shoot-kubeconfig.sh <gardener-kubeconfig> <project-name> platform > ./kubeconfigs/platform.kubeconfig -``` - -### Create a bootstrapping configuration file (bootstrapper-config.yaml) in the configuration folder - -Replace `` and `` with your Git organization and repository name. -The environment can be set to the logical environment name (e.g. `dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. -The branch can be set to the desired branch name in the Git repository that will be used to store the desired state of the openMCP landscape. - -Get the latest version of the `github.com/openmcp-project/openmcp` root component: - -```shell -TAG=$(curl -s "https://api.github.com/repos/openmcp-project/openmcp/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) -echo "${TAG}" -``` - -In the bootstrapper configuration, replace `` with the latest version of the `github.com/openmcp-project/openmcp` root component: - -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -openmcpOperator: - config: {} -``` - -### Create a Git configuration file (git-config.yaml) in the configuration folder - -For GitHub, use a personal access token with `repo` write permissions. -It is also possible to use a fine-grained token. In this case, it requires read and write permissions for `Contents`.
- -```yaml title="config/git-config.yaml" -auth: - basic: - username: "" - password: "" -``` - -### Run the `openmcp-bootstrapper` CLI tool to deploy FluxCD to the Platform Cluster - -Run the `openmcp-bootstrapper` CLI tool to deploy FluxCD to the `platform` Gardener Shoot cluster: - -```shell -docker run --rm -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} deploy-flux --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform.kubeconfig /config/bootstrapper-config.yaml -``` - -You should see output similar to the following: - -```shell -Info: Starting deployment of Flux controllers with config file: /config/bootstrapper-config.yaml. -Info: Ensure namespace flux-system exists -Info: Creating/updating git credentials secret flux-system/git -Info: Created/updated git credentials secret flux-system/git -Info: Creating working directory for gitops-templates -Info: Downloading templates -/tmp/openmcp.cloud.bootstrapper-3041773446/download: 9 file(s) with 691073 byte(s) written -Info: Arranging template files -Info: Arranged template files -Info: Applying templates from gitops-templates/fluxcd to deployment repository -Info: Kustomizing files in directory: /tmp/openmcp.cloud.bootstrapper-3041773446/repo/envs/dev/fluxcd -Info: Applying flux deployment objects -Info: Deployment of flux controllers completed -``` - -### Inspect the deployed FluxCD controllers and Kustomization - -Using the `platform.kubeconfig` downloaded earlier, check the deployed FluxCD controllers and the created GitRepository and Kustomization.
- -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n flux-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -helm-controller-648cdbf8d8-8jhnf 1/1 Running 0 9m37s -image-automation-controller-56df4c78dc-qwmfm 1/1 Running 0 9m35s -image-reflector-controller-56f69fcdc9-pgcgx 1/1 Running 0 9m35s -kustomize-controller-b4c4dcdc8-g49gc 1/1 Running 0 9m38s -notification-controller-59d754d599-w7fjp 1/1 Running 0 9m36s -source-controller-6b45b6464f-jbgb6 1/1 Running 0 9m38s -``` - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME URL AGE READY STATUS -flux-system environments https://github.com// 86s False failed to checkout and determine revision: unable to clone 'https://github.com//': couldn't find remote ref "refs/heads/" -``` - -This error is expected as the branch does not exist yet in the Git repository. The `openmcp-bootstrapper` will create the branch in the next step. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME AGE READY STATUS -flux-system flux-system 3m15s False Source artifact not found, retrying in 30s -``` - -This error is also expected as the GitRepository does not exist yet. The `openmcp-bootstrapper` will create the GitRepository in the next step. - -### Run the `openmcp-bootstrapper` CLI tool to deploy openMCP to the Platform Cluster - -Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the Gardener cluster provider and the openmcp-operator configuration. - -Please replace `` with the logical environment name (e.g.
`dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. Notice that the same environment name must be used in the `environment` field and in the scheduler profiles. - -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -providers: - clusterProviders: - - name: gardener - -openmcpOperator: - config: - managedControlPlane: - mcpClusterPurpose: mcp-worker - reconcileMCPEveryXDays: 7 - scheduler: - scope: Cluster - purposeMappings: - mcp-worker: - template: - metadata: - namespace: openmcp-system - spec: - profile: .gardener.shoot-small - tenancy: Exclusive - platform: - template: - metadata: - namespace: openmcp-system - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: .gardener.shoot-small - tenancy: Shared - onboarding: - template: - metadata: - namespace: openmcp-system - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: .gardener.shoot-workerless - tenancy: Shared - workload: - tenancyCount: 20 - template: - metadata: - namespace: openmcp-system - spec: - profile: .gardener.shoot-small - tenancy: Shared -``` - -Create a directory named `extra-manifests` in the configuration folder. - -```shell -mkdir ./config/extra-manifests -``` - -In the `extra-manifests` folder, create a file named `gardener-landscape.yaml` with the following content: - -```yaml title="config/extra-manifests/gardener-landscape.yaml" -apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 -kind: Landscape -metadata: - name: gardener-landscape -spec: - access: - secretRef: - name: gardener-landscape-kubeconfig - namespace: openmcp-system -``` - -The gardener landscape configuration requires a secret that contains the kubeconfig to access the Gardener project. 
For that purpose, create a secret named `gardener-landscape-kubeconfig` in the `openmcp-system` namespace of the platform cluster that contains a kubeconfig with access to the Gardener installation. -See the [Gardener documentation](https://gardener.cloud/docs/dashboard/automated-resource-management/#create-a-service-account) on how to create a service account in the Gardener project using the Gardener dashboard. -Create a service account with at least the `admin` role in the Gardener project. Then [download](https://gardener.cloud/docs/dashboard/automated-resource-management/#use-the-service-account) the kubeconfig for the service account and save it to a file named `./kubeconfigs/gardener-landscape.kubeconfig`. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig create namespace openmcp-system -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig create secret generic gardener-landscape-kubeconfig --from-file=kubeconfig=./kubeconfigs/gardener-landscape.kubeconfig -n openmcp-system -``` - -In the `extra-manifests` folder, create a file named `gardener-cluster-provider-shoot-small.yaml` with the following content: - - - - -```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-small.yaml" -apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: shoot-small -spec: - landscapeRef: - name: gardener-landscape - project: - providerRef: - name: gardener - shootTemplate: - spec: - cloudProfile: - kind: CloudProfile - name: gcp - kubernetes: - version: "" # e.g. 
"1.32" - maintenance: - autoUpdate: - kubernetesVersion: true - timeWindow: - begin: 220000+0200 - end: 230000+0200 - networking: - nodes: 10.180.0.0/16 - type: calico - provider: - controlPlaneConfig: - apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 - kind: ControlPlaneConfig - zone: # e.g. europe-west1-c - infrastructureConfig: - apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 - kind: InfrastructureConfig - networks: - workers: 10.180.0.0/16 - type: gcp - workers: - - cri: - name: containerd - machine: - architecture: amd64 - image: - name: gardenlinux - version: "" # e.g. "1592.9.0" - type: n1-standard-2 - maxSurge: 1 - maximum: 5 - minimum: 1 - name: default-worker - volume: - size: 50Gi - type: pd-balanced - zones: - - # e.g. europe-west1-c - purpose: evaluation - region: # e.g. europe-west1 - secretBindingName: -``` - - - -```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-small.yaml" -apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: shoot-small -spec: - landscapeRef: - name: gardener-landscape - project: - providerRef: - name: gardener - shootTemplate: - spec: - cloudProfile: - kind: CloudProfile - name: aws - kubernetes: - version: "" # e.g. "1.32" - maintenance: - autoUpdate: - kubernetesVersion: true - timeWindow: - begin: 220000+0200 - end: 230000+0200 - networking: - type: calico - nodes: 10.180.0.0/16 - provider: - controlPlaneConfig: - apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1 - kind: ControlPlaneConfig - cloudControllerManager: - useCustomRouteController: true - storage: - managedDefaultClass: true - infrastructureConfig: - apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1 - kind: InfrastructureConfig - networks: - vpc: - cidr: 10.180.0.0/16 - zones: - - name: # e.g. 
eu-west-1a - workers: 10.180.0.0/19 - public: 10.180.32.0/20 - internal: 10.180.48.0/20 - type: aws - workers: - - cri: - name: containerd - machine: - architecture: amd64 - image: - name: gardenlinux - version: "" # e.g. "1592.9.0" - type: m5.large - maxSurge: 1 - maximum: 5 - minimum: 1 - name: default-worker - volume: - size: 50Gi - type: gp3 - zones: - - # e.g. eu-west-1a - purpose: evaluation - region: # e.g. eu-west-1 - secretBindingName: -``` - - - - -In the `extra-manifests` folder, create a file named `gardener-cluster-provider-shoot-workerless.yaml` with the following content: - - - - -```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-workerless.yaml" -apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: shoot-workerless -spec: - landscapeRef: - name: gardener-landscape - project: - providerRef: - name: gardener - shootTemplate: - spec: - cloudProfile: - kind: CloudProfile - name: gcp - kubernetes: - version: "" # e.g. "1.32" - maintenance: - autoUpdate: - kubernetesVersion: true - timeWindow: - begin: 220000+0200 - end: 230000+0200 - provider: - type: gcp - purpose: evaluation - region: # eg europe-west1 -``` - - - -```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-workerless.yaml" -apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: shoot-workerless -spec: - landscapeRef: - name: gardener-landscape - project: - providerRef: - name: gardener - shootTemplate: - spec: - cloudProfile: - kind: CloudProfile - name: aws - kubernetes: - version: "" # e.g. "1.32" - maintenance: - autoUpdate: - kubernetesVersion: true - timeWindow: - begin: 220000+0200 - end: 230000+0200 - provider: - type: aws - purpose: evaluation - region: # e.g. eu-west-1 -``` - - - - -Replace `` with the name of your Gardener project and `` with the name of the secret binding that contains the infrastructure secret for your Gardener project. 
- -Also replace `` with the desired Kubernetes version (e.g. `1.32`), `` with the desired Garden Linux version (e.g. `1592.9.0`), `` with the desired region (e.g. `europe-west1`), and `` with the desired zone (e.g. `europe-west1-c`). - -:::info -Please adjust the shoot configuration to your specific needs, e.g. change the purpose from `evaluation` to `production` if you plan to use the MCP for production purposes. For all details regarding the Shoot configuration, please consult the respective Gardener documentation. -::: - -Now run the `openmcp-bootstrapper` CLI tool to update the Git repository and deploy openMCP to the `platform` Kind cluster: - -```shell -docker run --rm -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform.kubeconfig --extra-manifest-dir /config/extra-manifests /config/bootstrapper-config.yaml -``` - -You should see output similar to the following: - -```shell -Info: Downloading component ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp:v0.0.25 -Info: Creating template transformer -Info: Downloading template resources -/tmp/openmcp.cloud.bootstrapper-245193548/transformer/download/fluxcd: 9 file(s) with 691073 byte(s) written -/tmp/openmcp.cloud.bootstrapper-245193548/transformer/download/openmcp: 8 file(s) with 6625 byte(s) written -Info: Transforming templates into deployment repository structure -Info: Fetching openmcp-operator component version -Info: Cloning deployment repository https://github.com/reshnm/openmcp-deployment -Info: Checking out or creating branch gardener -Info: Applying templates from "gitops-templates/fluxcd"/"gitops-templates/openmcp" to deployment repository -Info: Templating providers: clusterProviders=[{gardener [] map[]}], serviceProviders=[], platformServices=[], imagePullSecrets=[] -Info: Applying Custom 
Resource Definitions to deployment repository -/tmp/openmcp.cloud.bootstrapper-245193548/repo/resources/openmcp/crds: 8 file(s) with 484832 byte(s) written -/tmp/openmcp.cloud.bootstrapper-245193548/repo/resources/openmcp/crds: 3 file(s) with 198428 byte(s) written -Info: Applying extra manifests from /config/extra-manifests to deployment repository -Info: Committing and pushing changes to deployment repository -Info: Created commit: ee2b6ef079808fbc198b4f6eced1afb89f64d1d1 -Info: Running kustomize on /tmp/openmcp.cloud.bootstrapper-245193548/repo/envs/dev -Info: Applying Kustomization manifest: default/bootstrap -``` - -### Inspect the Git repository - -The desired state of the openMCP landscape has now been created in the Git repository and should look similar to the following structure: - -```shell -. -├── envs -│   └── dev -│   ├── fluxcd -│   │   ├── flux-kustomization.yaml -│   │   ├── gitrepo.yaml -│   │   └── kustomization.yaml -│   ├── kustomization.yaml -│   ├── openmcp -│   │   ├── config -│   │   │   └── openmcp-operator-config.yaml -│   │   └── kustomization.yaml -│   └── root-kustomization.yaml -└── resources - ├── fluxcd - │   ├── components.yaml - │   ├── flux-kustomization.yaml - │   ├── gitrepo.yaml - │   └── kustomization.yaml - ├── kustomization.yaml - ├── openmcp - │   ├── cluster-providers - │   │   └── gardener.yaml - │   ├── crds - │   │   ├── clusters.openmcp.cloud_accessrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml - │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusters.yaml - │   │   ├── gardener.clusters.openmcp.cloud_clusterconfigs.yaml - │   │   ├── gardener.clusters.openmcp.cloud_landscapes.yaml - │   │   ├── gardener.clusters.openmcp.cloud_providerconfigs.yaml - │   │   ├── openmcp.cloud_clusterproviders.yaml - │   │   ├── openmcp.cloud_platformservices.yaml - │   │   └── openmcp.cloud_serviceproviders.yaml - │   ├── deployment.yaml - │   ├── extra - │   │   
├── gardener-cluster-provider-shoot-small.yaml - │   │   ├── gardener-cluster-provider-shoot-workerless.yaml - │   │   └── gardener-landscape.yaml - │   ├── kustomization.yaml - │   ├── namespace.yaml - │   └── rbac.yaml - └── root-kustomization.yaml -``` - -The `envs/` folder contains the Kustomization files that are used by FluxCD to deploy openMCP to the platform cluster. -The `resources` folder contains the base resources that are used by the Kustomization files in the `envs/` folder. - -## Inspect the Kustomizations in the platform cluster - -Force an update of the GitRepository and Kustomization in the Kind cluster to pick up the changes made in the Git repository. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system patch kustomization flux-system --type merge -p '{"spec":{"force":true}}' -``` - -Get the status of the GitRepository in the platform cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME URL AGE READY STATUS -flux-system environments https://github.com// 9m6s True stored artifact for revision 'docs@sha1:...' -``` - -So we have now successfully configured FluxCD to watch for changes in the specified GitHub repository, using the `environments` custom resource of kind `GitRepository`. -Now let's get the status of the Kustomization in the Kind cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME AGE READY STATUS -default bootstrap 5m31s True Applied revision: docs@sha1:... -flux-system flux-system 10m True Applied revision: docs@sha1:... 
-``` - -You can see that there are now two Kustomizations in the platform cluster. -The `flux-system` Kustomization is used to deploy the FluxCD controllers and the `bootstrap` Kustomization is used to deploy openMCP to the platform cluster. - -### Inspect the deployed openMCP components on the platform cluster - -Now check the deployed openMCP components. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -cp-gardener-7f77684ffb-gw4jg 1/1 Running 0 35m -cp-gardener-init-wxnt4 0/1 Completed 0 35m -openmcp-operator-785b967f66-h2dlh 1/1 Running 0 67m -ps-managedcontrolplane-5b77749f7b-mtffp 1/1 Running 0 64m -ps-managedcontrolplane-init-pklrl 0/1 Completed 0 67m -``` - -The openmcp-operator, the managedcontrolplane platform service, and the Gardener cluster provider are now running. -You are now ready to create and manage clusters using openMCP. - -### Inspect cluster profiles and clusters - -Based on the provider configuration for the Gardener cluster provider, two cluster profiles should have been created: `dev.gardener.shoot-small` and `dev.gardener.shoot-workerless`. - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusterprofiles.clusters.openmcp.cloud -``` - -You should see output similar to the following: - -```shell -NAME PROVIDER CONFIG -dev.gardener.shoot-small gardener shoot-small -dev.gardener.shoot-workerless gardener shoot-workerless -``` - -As you can see, these names match the profile names used in the openmcp-operator configuration. The naming convention is `<environment>.<provider>.<profile>`. 
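The profile naming scheme can be illustrated as plain string composition. This sketch uses the values from this guide (environment `dev`, provider `gardener`, profile `shoot-small`) and is for illustration only:

```shell
# Illustrative only: compose a cluster profile name from the
# bootstrapper configuration values used in this guide.
ENVIRONMENT=dev
PROVIDER=gardener
PROFILE=shoot-small
CLUSTER_PROFILE="${ENVIRONMENT}.${PROVIDER}.${PROFILE}"
echo "${CLUSTER_PROFILE}"   # dev.gardener.shoot-small
```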
- -Inspecting a cluster profile, shows the supported kubernetes versions: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusterprofiles.clusters.openmcp.cloud dev.gardener.shoot-small -o yaml -``` - -You should see output similar to the following: - -```yaml -apiVersion: clusters.openmcp.cloud/v1alpha1 -kind: ClusterProfile -metadata: - creationTimestamp: "2025-10-01T06:38:48Z" - generation: 1 - name: dev.gardener.shoot-small - resourceVersion: "173288" - uid: 926aa91c-f021-41f7-b97c-dc7eaf0e19bf -spec: - providerConfigRef: - name: shoot-small - providerRef: - name: gardener - supportedVersions: - - version: 1.33.3 - - deprecated: true - version: 1.33.2 - - version: 1.32.7 - - deprecated: true - version: 1.32.6 - - deprecated: true - version: 1.32.5 - - deprecated: true - version: 1.32.4 - - deprecated: true - version: 1.32.3 - - deprecated: true - version: 1.32.2 - - version: 1.31.11 - - deprecated: true - version: 1.31.10 - - deprecated: true - version: 1.31.9 - - deprecated: true - version: 1.31.8 - - deprecated: true - version: 1.31.7 - - deprecated: true - version: 1.31.6 - - deprecated: true - version: 1.31.5 - - deprecated: true - version: 1.31.4 - - deprecated: true - version: 1.31.3 - - deprecated: true - version: 1.31.2 - - version: 1.30.14 - - deprecated: true - version: 1.30.13 - - deprecated: true - version: 1.30.12 - - deprecated: true - version: 1.30.11 - - deprecated: true - version: 1.30.10 - - deprecated: true - version: 1.30.9 - - deprecated: true - version: 1.30.8 - - deprecated: true - version: 1.30.7 - - deprecated: true - version: 1.30.6 - - deprecated: true - version: 1.30.5 - - deprecated: true - version: 1.30.4 - - deprecated: true - version: 1.30.3 - - deprecated: true - version: 1.30.2 - - deprecated: true - version: 1.30.1 -``` - -You can also see the onboarding cluster that has been created by the openmcp-operator. 
- -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusters.clusters.openmcp.cloud -A -``` - -You should see output similar to the following: - -```shell -NAMESPACE NAME PURPOSES PHASE VERSION PROVIDER AGE -openmcp-system onboarding ["onboarding"] Ready 1.32.7 gardener 30m -``` - -You can also get the shoot name of the onboarding cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusters.clusters.openmcp.cloud --namespace openmcp-system onboarding -o jsonpath="{.status.providerStatus.shoot.metadata.name}" -``` - -You should see output similar to the following: - -```shell -s-hl4uutd4 -``` - -If you want, you can inspect the Gardener shoot in your Gardener project. - -### Get Access to the Onboarding Cluster - -To create resources on the onboarding cluster, you first need access to it. -To do so, create an access request that grants admin permissions on the onboarding cluster. - -Create a file named `onboarding-access-request.yaml` in the configuration folder with the following content: - -```yaml title="config/onboarding-access-request.yaml" -apiVersion: clusters.openmcp.cloud/v1alpha1 -kind: AccessRequest -metadata: - name: bootstrapper-onboarding - namespace: openmcp-system -spec: - clusterRef: - name: onboarding - namespace: openmcp-system - token: - permissions: - - rules: - - apiGroups: - - '*' - resources: - - '*' - verbs: - - '*' -``` - -Then apply the file to the platform cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig apply -f ./config/onboarding-access-request.yaml -``` - -You can check the status of the access request using the following command: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get accessrequests.clusters.openmcp.cloud --namespace openmcp-system bootstrapper-onboarding -``` - -Once the access request has been granted, you should see output similar to the following: - -```shell -NAME PHASE 
-bootstrapper-onboarding Granted -``` - -Now you can get the kubeconfig of the onboarding cluster using the following command: - -```shell -SECRET_NAME=$(kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get accessrequests.clusters.openmcp.cloud --namespace openmcp-system bootstrapper-onboarding -o jsonpath="{.status.secretRef.name}") -SECRET_NAMESPACE=$(kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get accessrequests.clusters.openmcp.cloud --namespace openmcp-system bootstrapper-onboarding -o jsonpath="{.status.secretRef.namespace}") -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get secret ${SECRET_NAME} -n ${SECRET_NAMESPACE} -o jsonpath="{.data.kubeconfig}" | base64 -d > ./kubeconfigs/onboarding.kubeconfig -``` - -### Create a Managed Control Plane on the Onboarding Cluster - -Create a file named `my-mcp.yaml` with the following content in the configuration folder: - -```yaml title="config/my-mcp.yaml" -apiVersion: core.openmcp.cloud/v2alpha1 -kind: ManagedControlPlaneV2 -metadata: - name: my-mcp - namespace: default -spec: - iam: - tokens: - - name: admin - roleRefs: - - kind: ClusterRole - name: cluster-admin -``` - -Apply the file to the onboarding cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/my-mcp.yaml -``` - -The openmcp-operator should start to create the necessary resources in order to create the managed control plane. As a result, a new `Managed Control Plane` should be available soon. 
-You can check the status of the Managed Control Plane using the following command: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp -o yaml -``` - -After some time (this can take about 10 to 15 minutes), you should see output similar to the following: - -```yaml -apiVersion: core.openmcp.cloud/v2alpha1 -kind: ManagedControlPlaneV2 -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"core.openmcp.cloud/v2alpha1","kind":"ManagedControlPlaneV2","metadata":{"annotations":{},"name":"my-mcp","namespace":"default"},"spec":{"iam":{"tokens":[{"name":"admin","roleRefs":[{"kind":"ClusterRole","name":"cluster-admin"}]}]}}} - creationTimestamp: "2025-10-01T11:02:29Z" - finalizers: - - core.openmcp.cloud/mcp - - request.clusters.openmcp.cloud/my-mcp - generation: 1 - name: my-mcp - namespace: default - resourceVersion: "32021" - uid: acd0ce65-df78-4667-8b9c-540843a43294 -spec: - iam: - tokens: - - name: admin - roleRefs: - - kind: ClusterRole - name: cluster-admin -status: - access: - token_admin: - name: zmr7k5u7 - conditions: - - lastTransitionTime: "2025-10-01T11:06:35Z" - message: "" - observedGeneration: 1 - reason: AccessReady:token_admin_True - status: "True" - type: AccessReady.token_admin - - lastTransitionTime: "2025-10-01T11:06:35Z" - message: All accesses are ready - observedGeneration: 1 - reason: AllAccessReady_True - status: "True" - type: AllAccessReady - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: "" - observedGeneration: 1 - reason: ClusterConfigurations_True - status: "True" - type: Cluster.ClusterConfigurations - - lastTransitionTime: "2025-10-01T11:06:35Z" - message: API server /healthz endpoint responded with success status code. 
- observedGeneration: 1 - reason: HealthzRequestSucceeded - status: "True" - type: Cluster.Gardener_APIServerAvailable - - lastTransitionTime: "2025-10-01T11:21:55Z" - message: All control plane components are healthy. - observedGeneration: 1 - reason: ControlPlaneRunning - status: "True" - type: Cluster.Gardener_ControlPlaneHealthy - - lastTransitionTime: "2025-10-01T11:21:55Z" - message: All nodes are ready. - observedGeneration: 1 - reason: EveryNodeReady - status: "True" - type: Cluster.Gardener_EveryNodeReady - - lastTransitionTime: "2025-10-01T11:21:55Z" - message: All observability components are healthy. - observedGeneration: 1 - reason: ObservabilityComponentsRunning - status: "True" - type: Cluster.Gardener_ObservabilityComponentsHealthy - - lastTransitionTime: "2025-10-01T11:21:55Z" - message: All system components are healthy. - observedGeneration: 1 - reason: SystemComponentsRunning - status: "True" - type: Cluster.Gardener_SystemComponentsHealthy - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: "" - observedGeneration: 1 - reason: LandscapeManagement_True - status: "True" - type: Cluster.LandscapeManagement - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: "" - observedGeneration: 1 - reason: Meta_True - status: "True" - type: Cluster.Meta - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: "" - observedGeneration: 1 - reason: ShootManagement_True - status: "True" - type: Cluster.ShootManagement - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: Cluster conditions have been synced to MCP - observedGeneration: 1 - reason: ClusterConditionsSynced_True - status: "True" - type: ClusterConditionsSynced - - lastTransitionTime: "2025-10-01T11:02:34Z" - message: ClusterRequest is ready - observedGeneration: 1 - reason: ClusterRequestReady_True - status: "True" - type: ClusterRequestReady - - lastTransitionTime: "2025-10-01T11:02:29Z" - message: "" - observedGeneration: 1 - reason: Meta_True - status: "True" - type: Meta - 
observedGeneration: 1 - phase: Ready -``` - -The `status.phase` should be `Ready` and the `AllAccessReady` condition should be `True`. - -You can now get the kubeconfig of the managed control plane using the following command: - -```shell -TOKEN_NAME=$(kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp -o jsonpath="{.status.access.token_admin.name}") -TOKEN_NAMESPACE=$(kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp -o jsonpath="{.metadata.namespace}") -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get secret ${TOKEN_NAME} -n ${TOKEN_NAMESPACE} -o jsonpath="{.data.kubeconfig}" | base64 -d > ./kubeconfigs/my-mcp.kubeconfig -``` - -### Deploy the Crossplane Service Provider on the platform cluster - -Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the crossplane service provider. - -```yaml title="config/bootstrapper-config.yaml" -component: - location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: - -repository: - url: https://github.com// - pushBranch: - -environment: - -providers: - clusterProviders: - - name: gardener - serviceProviders: - - name: crossplane - -openmcpOperator: - config: - managedControlPlane: - mcpClusterPurpose: mcp-worker - reconcileMCPEveryXDays: 7 - scheduler: - scope: Cluster - purposeMappings: - mcp-worker: - template: - metadata: - namespace: openmcp-system - spec: - profile: .gardener.shoot-small - tenancy: Exclusive - platform: - template: - metadata: - namespace: openmcp-system - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: .gardener.shoot-small - tenancy: Shared - onboarding: - template: - metadata: - namespace: openmcp-system - labels: - clusters.openmcp.cloud/delete-without-requests: "false" - spec: - profile: .gardener.shoot-workerless - tenancy: Shared - workload: - tenancyCount: 20 - template: - metadata: - namespace: 
openmcp-system - spec: - profile: .gardener.shoot-small - tenancy: Shared -``` - -Then create a file named `crossplane-provider.yaml` with the following content, and save it in the `extra-manifests` folder. - -:::info -Note that the Crossplane service provider only supports installing Crossplane from an OCI registry. Replace the chart locations in the `ProviderConfig` with the OCI registry where you mirror your Crossplane chart versions. OpenMCP will provide this as part of an open source [release channel](https://github.com/openmcp-project/backlog/issues/323) in an upcoming update. -::: - -```yaml title="config/extra-manifests/crossplane-provider.yaml" -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: ProviderConfig -metadata: - name: default -spec: - versions: - - version: v2.0.2 - chart: - url: ghcr.io/openmcp-project/charts/crossplane:2.0.2 - image: - url: xpkg.crossplane.io/crossplane/crossplane:v2.0.2 - - version: v1.20.1 - chart: - url: ghcr.io/openmcp-project/charts/crossplane:1.20.1 - image: - url: xpkg.crossplane.io/crossplane/crossplane:v1.20.1 - providers: - availableProviders: - - name: provider-kubernetes - package: xpkg.upbound.io/upbound/provider-kubernetes - versions: - - v0.16.0 -``` - -Run the `openmcp-bootstrapper` CLI tool to update the Git repository and deploy the Crossplane service provider to the platform cluster. - -```shell -docker run --rm -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform.kubeconfig --extra-manifest-dir /config/extra-manifests /config/bootstrapper-config.yaml -``` - -Note the `--extra-manifest-dir` parameter, which points to the folder containing the extra manifest file created in the previous step. All manifest files in this folder will be added to the Kustomization used by FluxCD to deploy openMCP to the platform cluster. 
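When iterating on the extra manifests, a quick sanity check can catch a missing file before a bootstrapper run. This sketch is illustrative; the file names are the ones used in this guide:

```shell
# Illustrative sanity check: confirm that all extra manifests used in this
# guide exist in the given directory before invoking the bootstrapper.
check_extra_manifests() {
  dir="$1"
  missing=0
  for f in gardener-landscape.yaml \
           gardener-cluster-provider-shoot-small.yaml \
           gardener-cluster-provider-shoot-workerless.yaml \
           crossplane-provider.yaml; do
    # Report each file that is not present.
    [ -f "${dir}/${f}" ] || { echo "missing: ${f}"; missing=1; }
  done
  return "${missing}"
}

if check_extra_manifests ./config/extra-manifests; then
  echo "all extra manifests present"
else
  echo "some extra manifests are missing"
fi
```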
- -The git repository should now be updated: - -```shell -. -├── envs -│   └── dev -│   ├── fluxcd -│   │   ├── flux-kustomization.yaml -│   │   ├── gitrepo.yaml -│   │   └── kustomization.yaml -│   ├── kustomization.yaml -│   ├── openmcp -│   │   ├── config -│   │   │   └── openmcp-operator-config.yaml -│   │   └── kustomization.yaml -│   └── root-kustomization.yaml -└── resources - ├── fluxcd - │   ├── components.yaml - │   ├── flux-kustomization.yaml - │   ├── gitrepo.yaml - │   └── kustomization.yaml - ├── kustomization.yaml - ├── openmcp - │   ├── cluster-providers - │   │   └── gardener.yaml - │   ├── crds - │   │   ├── clusters.openmcp.cloud_accessrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml - │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml - │   │   ├── clusters.openmcp.cloud_clusters.yaml - │   │   ├── crossplane.services.openmcp.cloud_providerconfigs.yaml - │   │   ├── gardener.clusters.openmcp.cloud_clusterconfigs.yaml - │   │   ├── gardener.clusters.openmcp.cloud_landscapes.yaml - │   │   ├── gardener.clusters.openmcp.cloud_providerconfigs.yaml - │   │   ├── openmcp.cloud_clusterproviders.yaml - │   │   ├── openmcp.cloud_platformservices.yaml - │   │   └── openmcp.cloud_serviceproviders.yaml - │   ├── deployment.yaml - │   ├── extra - │   │   ├── crossplane-provider.yaml - │   │   ├── gardener-cluster-provider-shoot-small.yaml - │   │   ├── gardener-cluster-provider-shoot-workerless.yaml - │   │   └── gardener-landscape.yaml - │   ├── kustomization.yaml - │   ├── namespace.yaml - │   ├── rbac.yaml - │   └── service-providers - │   └── crossplane.yaml - └── root-kustomization.yaml -``` - -After a while, the Kustomization in the platform cluster should be updated and the crossplane service provider should be deployed: -You can force an update of the Kustomization in the platform cluster to pick up the changes made in the Git repository. 
- -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n default patch kustomization bootstrap --type merge -p '{"spec":{"force":true}}' -``` - -List the pods in the `openmcp-system` namespace again: - -```shell -kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system -``` - -You should see output similar to the following: - -```shell -NAME READY STATUS RESTARTS AGE -cp-gardener-84b7ff4c9c-vf2sc 1/1 Running 0 3m3s -cp-gardener-init-xr7fs 0/1 Completed 0 3m7s -openmcp-operator-785b967f66-h2dlh 1/1 Running 0 74m -ps-managedcontrolplane-5b77749f7b-mtffp 1/1 Running 0 71m -ps-managedcontrolplane-init-pklrl 0/1 Completed 0 74m -``` - -The Crossplane service provider should now be running as well. This means that from now on, openMCP is able to provide Crossplane service instances through the new Crossplane service provider. - -### Create a Crossplane service instance on the onboarding cluster - -Create a file named `crossplane-instance.yaml` with the following content in the configuration folder: - -```yaml title="config/crossplane-instance.yaml" -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: Crossplane -metadata: - name: my-mcp - namespace: default -spec: - version: v1.20.1 - providers: - - name: provider-kubernetes - version: v0.16.0 -``` - -Apply the file to the onboarding cluster: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/crossplane-instance.yaml -``` - -The Crossplane service provider should now start to create the necessary resources for the new Crossplane instance. As a result, a new Crossplane service instance should soon be available. 
-You can check the status of the Crossplane instance using the following command: - -```shell -kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get crossplane -n default my-mcp -o yaml -``` - -After a while, you should see output similar to the following: - -```yaml -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: Crossplane -metadata: - finalizers: - - openmcp.cloud/finalizers - generation: 1 - name: my-mcp - namespace: default -spec: - providers: - - name: provider-kubernetes - version: v0.16.0 - version: v1.20.1 -status: - conditions: - - lastTransitionTime: "2025-09-16T14:09:56Z" - message: Crossplane is healthy. - reason: Healthy - status: "True" - type: CrossplaneReady - - lastTransitionTime: "2025-09-16T14:10:01Z" - message: ProviderKubernetes is healthy. - reason: Healthy - status: "True" - type: ProviderKubernetesReady - observedGeneration: 0 - phase: "" -``` - -Crossplane and the provider Kubernetes should now be available on the MCP cluster. - -```shell -kubectl --kubeconfig ./kubeconfigs/my-mcp.kubeconfig api-resources | grep 'crossplane\|kubernetes' -``` diff --git a/docs/operators/01-kind-provider.md b/docs/operators/01-kind-provider.md new file mode 100644 index 0000000..28171cb --- /dev/null +++ b/docs/operators/01-kind-provider.md @@ -0,0 +1,351 @@ +--- +sidebar_position: 1 +--- + +import Tabs from '@theme/Tabs'; +import CodeBlock from '@theme/CodeBlock'; +import TabItem from '@theme/TabItem'; + +# Dev: Run on Kind + +## Requirements + +* [Docker](https://docs.docker.com/get-docker/) installed and running. Alternatively, Docker can be replaced with another OCI runtime (e.g. Podman) that can run the `openmcp-bootstrapper` CLI tool as an OCI image. +* [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/) installed + +:::info +If you are using a Docker alternative, make sure that it is set up correctly for Docker compatibility. 
In the case of Podman, you should find a corresponding configuration under `Settings` in the Podman UI. +::: + +## Create a configuration folder + +Create a directory that will be used to store the configuration files and the kubeconfig files. +To keep this example simple, we will use a single directory named `config` in the current working directory. + +```shell +mkdir config +``` + +All following examples will use the `config` directory as the configuration directory. If you use a different directory, replace all occurrences of `config` with your desired directory path. + +Create a directory named `kubeconfigs` next to the configuration folder to store the kubeconfig files of the created clusters. + +```shell +mkdir kubeconfigs +``` + +## Create the Kind configuration file (kind-config.yaml) in the configuration folder + +```yaml +apiVersion: kind.x-k8s.io/v1alpha4 +kind: Cluster +nodes: +- role: control-plane + extraMounts: + - hostPath: /var/run/docker.sock + containerPath: /var/run/host-docker.sock +``` + +## Create the Kind cluster + +Create the Kind cluster using the configuration file created in the previous step. + +:::warning + +Please check whether your current `kind` network has a `/16` subnet. This is required for our cluster-provider-kind. +You can check the current network configuration using: + +```shell +docker network inspect kind | jq ".[].IPAM.Config.[].Subnet" +"172.19.0.0/16" +``` + +If the result shows something smaller than `/16`, such as `/24`, you need to delete the network and create a new one. For that, **all Kind clusters need to be deleted**. Then run: + +```shell +docker network rm kind + +docker network create kind --subnet 172.19.0.0/16 +``` + +::: + +:::info Podman Support +If you are using Podman instead of Docker, it is currently required to first create a suitable network for the Kind cluster by executing the following command before creating the Kind cluster itself.
+ +```shell +podman network create kind --subnet 172.19.0.0/16 +``` + +::: + +```shell +kind create cluster --name platform --config ./config/kind-config.yaml +``` + +Export the internal kubeconfig of the Kind cluster to a file named `platform-int.kubeconfig` in the `kubeconfigs` folder. + +```shell +kind get kubeconfig --internal --name platform > ./kubeconfigs/platform-int.kubeconfig +``` + +## Create a bootstrapping configuration file (bootstrapper-config.yaml) in the configuration folder + +Replace `` and `` with your Git organization and repository name. +The environment can be set to the logical environment name (e.g. `dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. +The branch can be set to the desired branch name in the Git repository that will be used to store the desired state of the openMCP landscape. + +Get the latest version of the `github.com/openmcp-project/openmcp` root component: + +```shell +TAG=$(curl -s "https://api.github.com/repos/openmcp-project/openmcp/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) +echo "${TAG}" +``` + +In the bootstrapper configuration, replace `` with the latest version of the `github.com/openmcp-project/openmcp` root component: + +```yaml title="config/bootstrapper-config.yaml" +component: + location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: + +repository: + url: https://github.com// + pushBranch: + +environment: + +openmcpOperator: + config: {} +``` + +## Create a Git configuration file (git-config.yaml) in the configuration folder + +For GitHub, use a personal access token with `repo` write permissions. +It is also possible to use a fine-grained token; in this case, it requires read and write permissions for `Contents`.
+ +```yaml title="config/git-config.yaml" +auth: + basic: + username: "" + password: "" +``` + +## Run the `openmcp-bootstrapper` CLI tool and deploy FluxCD to the Kind cluster + +Set the environment variable `OPENMCP_BOOTSTRAPPER_VERSION` to the `openmcp-bootstrapper` release you want to use, then run: + +```shell +docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} deploy-flux --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig /config/bootstrapper-config.yaml +``` + +You should see output similar to the following: + +```shell +Info: Starting deployment of Flux controllers with config file: /config/bootstrapper-config.yaml. +Info: Ensure namespace flux-system exists +Info: Creating/updating git credentials secret flux-system/git +Info: Created/updated git credentials secret flux-system/git +Info: Creating working directory for gitops-templates +Info: Downloading templates +/tmp/openmcp.cloud.bootstrapper-3041773446/download: 9 file(s) with 691073 byte(s) written +Info: Arranging template files +Info: Arranged template files +Info: Applying templates from gitops-templates/fluxcd to deployment repository +Info: Kustomizing files in directory: /tmp/openmcp.cloud.bootstrapper-3041773446/repo/envs/dev/fluxcd +Info: Applying flux deployment objects +Info: Deployment of flux controllers completed +``` + +## Inspect the deployed FluxCD controllers and Kustomization + +Load the kubeconfig of the Kind cluster and check the deployed FluxCD controllers and the created GitRepository and Kustomization.
+ +```shell +kind get kubeconfig --name platform > ./kubeconfigs/platform.kubeconfig +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n flux-system +``` + +You should see output similar to the following: + +```shell +NAME READY STATUS RESTARTS AGE +helm-controller-648cdbf8d8-8jhnf 1/1 Running 0 9m37s +image-automation-controller-56df4c78dc-qwmfm 1/1 Running 0 9m35s +image-reflector-controller-56f69fcdc9-pgcgx 1/1 Running 0 9m35s +kustomize-controller-b4c4dcdc8-g49gc 1/1 Running 0 9m38s +notification-controller-59d754d599-w7fjp 1/1 Running 0 9m36s +source-controller-6b45b6464f-jbgb6 1/1 Running 0 9m38s +``` + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME URL AGE READY STATUS +flux-system environments https://github.com// 86s False failed to checkout and determine revision: unable to clone 'https://github.com//': couldn't find remote ref "refs/heads/" +``` + +This error is expected as the branch does not exist yet in the Git repository. The `openmcp-bootstrapper` will create the branch in the next step. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME AGE READY STATUS +flux-system flux-system 3m15s False Source artifact not found, retrying in 30s +``` + +This error is also expected, as the GitRepository has not produced a source artifact yet. It will resolve once the `openmcp-bootstrapper` has created the branch in the next step. + +## Run the `openmcp-bootstrapper` CLI tool to deploy openMCP to the Kind cluster + +Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the kind cluster provider and the openmcp-operator configuration.
+ +```yaml title="config/bootstrapper-config.yaml" +component: + location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: + +repository: + url: https://github.com// + pushBranch: + +environment: + +providers: + clusterProviders: + - name: kind + config: + extraVolumeMounts: + - mountPath: /var/run/docker.sock + name: docker + extraVolumes: + - name: docker + hostPath: + path: /var/run/host-docker.sock + type: Socket + +openmcpOperator: + config: + managedControlPlane: + mcpClusterPurpose: mcp-worker + reconcileMCPEveryXDays: 7 + scheduler: + scope: Cluster + purposeMappings: + mcp: + template: + spec: + profile: kind + tenancy: Exclusive + mcp-worker: + template: + spec: + profile: kind + tenancy: Exclusive + platform: + template: + metadata: + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: kind + tenancy: Shared + onboarding: + template: + metadata: + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: kind + tenancy: Shared + workload: + tenancyCount: 20 + template: + spec: + profile: kind + tenancy: Shared +``` + +```shell +docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig /config/bootstrapper-config.yaml +``` + +You should see output similar to the following: + +```shell +Info: Downloading component ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp:v0.0.20 +Info: Creating template transformer +Info: Downloading template resources +/tmp/openmcp.cloud.bootstrapper-2402093624/transformer/download/fluxcd: 9 file(s) with 691073 byte(s) written +/tmp/openmcp.cloud.bootstrapper-2402093624/transformer/download/openmcp: 8 file(s) with 6625 byte(s) written +Info: Transforming templates into deployment repository structure +Info: 
Fetching openmcp-operator component version +Info: Cloning deployment repository https://github.com/reshnm/template-test +Info: Checking out or creating branch kind +Info: Applying templates from "gitops-templates/fluxcd"/"gitops-templates/openmcp" to deployment repository +Info: Templating providers: clusterProviders=[{kind [123 34 101 120 116 114 97 86 111 108 117 109 101 77 111 117 110 116 115 34 58 91 123 34 109 111 117 110 116 80 97 116 104 34 58 34 47 118 97 114 47 114 117 110 47 100 111 99 107 101 114 46 115 111 99 107 34 44 34 110 97 109 101 34 58 34 100 111 99 107 101 114 34 125 93 44 34 101 120 116 114 97 86 111 108 117 109 101 115 34 58 91 123 34 104 111 115 116 80 97 116 104 34 58 123 34 112 97 116 104 34 58 34 47 118 97 114 47 114 117 110 47 104 111 115 116 45 100 111 99 107 101 114 46 115 111 99 107 34 44 34 116 121 112 101 34 58 34 83 111 99 107 101 116 34 125 44 34 110 97 109 101 34 58 34 100 111 99 107 101 114 34 125 93 44 34 118 101 114 98 111 115 105 116 121 34 58 34 100 101 98 117 103 34 125] map[extraVolumeMounts:[map[mountPath:/var/run/docker.sock name:docker]] extraVolumes:[map[hostPath:map[path:/var/run/host-docker.sock type:Socket] name:docker]] verbosity:debug]}], serviceProviders=[], platformServices=[], imagePullSecrets=[] +Info: Applying Custom Resource Definitions to deployment repository +/tmp/openmcp.cloud.bootstrapper-2402093624/repo/resources/openmcp/crds: 8 file(s) with 475468 byte(s) written +/tmp/openmcp.cloud.bootstrapper-2402093624/repo/resources/openmcp/crds: 1 file(s) with 1843 byte(s) written +Info: No extra manifest directory specified, skipping +Info: Committing and pushing changes to deployment repository +Info: Created commit: 287f9e88b905371bba412b5d0286ad02db0f4aac +Info: Running kustomize on /tmp/openmcp.cloud.bootstrapper-2402093624/repo/envs/dev +Info: Applying Kustomization manifest: default/bootstrap + +``` + +## Inspect the Git repository + +The desired state of the openMCP landscape has now been created in the 
Git repository and should look similar to the following structure: + +```shell +. +├── envs +│ └── dev +│ ├── fluxcd +│ │ ├── flux-kustomization.yaml +│ │ ├── gitrepo.yaml +│ │ └── kustomization.yaml +│ ├── kustomization.yaml +│ ├── openmcp +│ │ ├── config +│ │ │ └── openmcp-operator-config.yaml +│ │ └── kustomization.yaml +│ └── root-kustomization.yaml +└── resources + ├── fluxcd + │ ├── components.yaml + │ ├── flux-kustomization.yaml + │ ├── gitrepo.yaml + │ └── kustomization.yaml + ├── kustomization.yaml + ├── openmcp + │ ├── cluster-providers + │ │ └── kind.yaml + │ ├── crds + │ │ ├── clusters.openmcp.cloud_accessrequests.yaml + │ │ ├── clusters.openmcp.cloud_clusterprofiles.yaml + │ │ ├── clusters.openmcp.cloud_clusterrequests.yaml + │ │ ├── clusters.openmcp.cloud_clusters.yaml + │ │ ├── kind.clusters.openmcp.cloud_providerconfigs.yaml + │ │ ├── openmcp.cloud_clusterproviders.yaml + │ │ ├── openmcp.cloud_platformservices.yaml + │ │ └── openmcp.cloud_serviceproviders.yaml + │ ├── deployment.yaml + │ ├── kustomization.yaml + │ ├── namespace.yaml + │ └── rbac.yaml + └── root-kustomization.yaml +``` + +The `envs/` folder contains the Kustomization files that are used by FluxCD to deploy openMCP to the Kind cluster. +The `resources` folder contains the base resources that are used by the Kustomization files in the `envs/` folder. + +## Next Steps + +Continue to [Verify Setup](./03-verify-setup.md) to inspect the Kustomizations and deployed components. 
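:::tip
When iterating on the deployment repository, you do not have to wait for FluxCD's regular sync interval. Annotating the GitRepository with a changing `reconcile.fluxcd.io/requestedAt` value triggers an immediate re-fetch — the current epoch second is just a convenient always-changing value. A minimal sketch (the `kubectl` command in the comment assumes the cluster from this guide is running):

```shell
# Any value that differs from the previous one works; the current
# epoch second is a convenient choice:
REQUESTED_AT="$(date +%s)"
echo "reconcile.fluxcd.io/requestedAt=${REQUESTED_AT}"

# Applied to the GitRepository created by the bootstrapper:
#   kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system \
#     annotate gitrepository environments \
#     reconcile.fluxcd.io/requestedAt="${REQUESTED_AT}" --overwrite
```
:::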
diff --git a/docs/operators/02-gardener-provider.md b/docs/operators/02-gardener-provider.md new file mode 100644 index 0000000..c3fd368 --- /dev/null +++ b/docs/operators/02-gardener-provider.md @@ -0,0 +1,600 @@ +--- +sidebar_position: 2 +--- + +import Tabs from '@theme/Tabs'; +import CodeBlock from '@theme/CodeBlock'; +import TabItem from '@theme/TabItem'; + +# Prod: Run on Gardener + +### Requirements + +* A running Gardener installation (see the [Gardener documentation](https://gardener.cloud/docs/) for more information on Gardener) +* A Gardener project in which the clusters will be created +* An infrastructure secret in the Gardener project (see the [Gardener documentation](https://gardener.cloud/docs/getting-started/project/#infrastructure-secrets) for more information on how to create an infrastructure secret) +* Kubectl (see the [Kubectl installation guide](https://kubernetes.io/docs/tasks/tools/#kubectl) for more information on how to install kubectl) +* If the Gardener installation is using OIDC for authentication, install the [OIDC kubectl plugin](https://github.com/int128/kubelogin) +* A good understanding of Gardener, including how to create Gardener Shoot clusters and service accounts in Gardener projects + +### Create a configuration folder + +Create a directory that will be used to store the configuration files and the kubeconfig files. +To keep this example simple, we will use a single directory named `config` in the current working directory. + +```shell +mkdir config +``` + +All following examples will use the `config` directory as the configuration directory. If you use a different directory, replace all occurrences of `config` with your desired directory path. + +Create a directory named `kubeconfigs` next to the configuration folder to store the kubeconfig files of the created clusters.
+ +```shell +mkdir kubeconfigs +``` + +### Create a Gardener Shoot for the Platform Cluster + +openMCP requires a running Kubernetes cluster that acts as the platform cluster. +The platform cluster hosts the openmcp-operator and all service providers, cluster providers, and platform services. +In this example, we will create a Gardener Shoot cluster that acts as the platform cluster. See the [Gardener documentation](https://gardener.cloud/docs/getting-started/shoots/) for more information on how to create a Gardener Shoot cluster. + +Create a script folder named `scripts`: + +```shell +mkdir scripts +``` + +Create a file named `get-shoot-kubeconfig.sh` in the `scripts` folder with the following content: + +```shell title="scripts/get-shoot-kubeconfig.sh" +#!/usr/bin/env bash + +GARDENER_SECRET=$1 +NAMESPACE="garden-$2" +SHOOT_NAME=$3 + +REQUEST_PATH="$(mktemp -d)" +REQUEST="${REQUEST_PATH}/admin-kubeconfig-request.json" + +echo "{ \"apiVersion\": \"authentication.gardener.cloud/v1alpha1\", \"kind\": \"AdminKubeconfigRequest\", \"spec\": { \"expirationSeconds\": 7776000 } }" > ${REQUEST} 2>/dev/null + +KUBECONFIG=$(kubectl --kubeconfig "${GARDENER_SECRET}" create \ + -f ${REQUEST} \ + --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/adminkubeconfig 2>/dev/null | jq -r ".status.kubeconfig" | base64 -d) + + +echo "${KUBECONFIG}" +``` + +Make the script executable: + +```shell +chmod +x ./scripts/get-shoot-kubeconfig.sh +``` + +In order to execute this script, you need a kubeconfig file that has access to the Gardener installation. This can be acquired by navigating to the Gardener dashboard, selecting your user (the icon in the upper right corner), clicking 'My Account', and downloading the kubeconfig file under `Access`. + +Alternatively, you can create a service account with the `Admin` role in the Gardener project and then retrieve the kubeconfig for the service account.
See the [Gardener documentation](https://gardener.cloud/docs/getting-started/project/#service-accounts) for more information on how to create a service account. + +Now, create a new Gardener Shoot cluster in your Gardener project using the Gardener dashboard or the Gardener API via kubectl. The name of the Shoot cluster must be `platform`. +Please consult the [Gardener documentation](https://gardener.cloud/docs/getting-started/shoots/) for more information on how to create a Gardener Shoot cluster. + +Download the admin kubeconfig of the `platform` Shoot cluster using the script created above (`get-shoot-kubeconfig.sh`) and save it to a file named `platform.kubeconfig` in the `kubeconfigs` folder. The script takes the Gardener kubeconfig, the project name, and the shoot name as arguments. + +```shell +./scripts/get-shoot-kubeconfig.sh <path-to-gardener-kubeconfig> <project-name> platform > ./kubeconfigs/platform.kubeconfig +``` + +### Create a bootstrapping configuration file (bootstrapper-config.yaml) in the configuration folder + +Replace `` and `` with your Git organization and repository name. +The environment can be set to the logical environment name (e.g. `dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. +The branch can be set to the desired branch name in the Git repository that will be used to store the desired state of the openMCP landscape.
+ +Get the latest version of the `github.com/openmcp-project/openmcp` root component: + +```shell +TAG=$(curl -s "https://api.github.com/repos/openmcp-project/openmcp/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) +echo "${TAG}" +``` + +In the bootstrapper configuration, replace `` with the latest version of the `github.com/openmcp-project/openmcp` root component: + +```yaml title="config/bootstrapper-config.yaml" +component: + location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: + +repository: + url: https://github.com// + pushBranch: + +environment: + +openmcpOperator: + config: {} +``` + +### Create a Git configuration file (git-config.yaml) in the configuration folder + +For GitHub, use a personal access token with `repo` write permissions. +It is also possible to use a fine-grained token; in this case, it requires read and write permissions for `Contents`. + +```yaml title="config/git-config.yaml" +auth: + basic: + username: "" + password: "" +``` + +### Run the `openmcp-bootstrapper` CLI tool to deploy FluxCD to the Platform Cluster + +Set the environment variable `OPENMCP_BOOTSTRAPPER_VERSION` to the `openmcp-bootstrapper` release you want to use, then run the CLI tool to deploy FluxCD to the `platform` Gardener Shoot cluster: + +```shell +docker run --rm -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} deploy-flux --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform.kubeconfig /config/bootstrapper-config.yaml +``` + +You should see output similar to the following: + +```shell +Info: Starting deployment of Flux controllers with config file: /config/bootstrapper-config.yaml.
+Info: Ensure namespace flux-system exists +Info: Creating/updating git credentials secret flux-system/git +Info: Created/updated git credentials secret flux-system/git +Info: Creating working directory for gitops-templates +Info: Downloading templates +/tmp/openmcp.cloud.bootstrapper-3041773446/download: 9 file(s) with 691073 byte(s) written +Info: Arranging template files +Info: Arranged template files +Info: Applying templates from gitops-templates/fluxcd to deployment repository +Info: Kustomizing files in directory: /tmp/openmcp.cloud.bootstrapper-3041773446/repo/envs/dev/fluxcd +Info: Applying flux deployment objects +Info: Deployment of flux controllers completed +``` + +### Inspect the deployed FluxCD controllers and Kustomization + +Using the `platform.kubeconfig` downloaded earlier, check the deployed FluxCD controllers and the created GitRepository and Kustomization. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n flux-system +``` + +You should see output similar to the following: + +```shell +NAME READY STATUS RESTARTS AGE +helm-controller-648cdbf8d8-8jhnf 1/1 Running 0 9m37s +image-automation-controller-56df4c78dc-qwmfm 1/1 Running 0 9m35s +image-reflector-controller-56f69fcdc9-pgcgx 1/1 Running 0 9m35s +kustomize-controller-b4c4dcdc8-g49gc 1/1 Running 0 9m38s +notification-controller-59d754d599-w7fjp 1/1 Running 0 9m36s +source-controller-6b45b6464f-jbgb6 1/1 Running 0 9m38s +``` + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME URL AGE READY STATUS +flux-system environments https://github.com// 86s False failed to checkout and determine revision: unable to clone 'https://github.com//': couldn't find remote ref "refs/heads/" +``` + +This error is expected as the branch does not exist yet in the Git
repository. The `openmcp-bootstrapper` will create the branch in the next step. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME AGE READY STATUS +flux-system flux-system 3m15s False Source artifact not found, retrying in 30s +``` + +This error is also expected, as the GitRepository has not produced a source artifact yet. It will resolve once the `openmcp-bootstrapper` has created the branch in the next step. + +### Run the `openmcp-bootstrapper` CLI tool to deploy openMCP to the Platform Cluster + +Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the Gardener cluster provider and the openmcp-operator configuration. + +Please replace `` with the logical environment name (e.g. `dev`, `prod`, `live-eu-west`) that will be used in the Git repository to separate different environments. Notice that the same environment name must be used in the `environment` field and in the scheduler profiles.
+ +```yaml title="config/bootstrapper-config.yaml" +component: + location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: + +repository: + url: https://github.com// + pushBranch: + +environment: + +providers: + clusterProviders: + - name: gardener + +openmcpOperator: + config: + managedControlPlane: + mcpClusterPurpose: mcp-worker + reconcileMCPEveryXDays: 7 + scheduler: + scope: Cluster + purposeMappings: + mcp-worker: + template: + metadata: + namespace: openmcp-system + spec: + profile: .gardener.shoot-small + tenancy: Exclusive + platform: + template: + metadata: + namespace: openmcp-system + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: .gardener.shoot-small + tenancy: Shared + onboarding: + template: + metadata: + namespace: openmcp-system + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: .gardener.shoot-workerless + tenancy: Shared + workload: + tenancyCount: 20 + template: + metadata: + namespace: openmcp-system + spec: + profile: .gardener.shoot-small + tenancy: Shared +``` + +Create a directory named `extra-manifests` in the configuration folder. + +```shell +mkdir ./config/extra-manifests +``` + +In the `extra-manifests` folder, create a file named `gardener-landscape.yaml` with the following content: + +```yaml title="config/extra-manifests/gardener-landscape.yaml" +apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 +kind: Landscape +metadata: + name: gardener-landscape +spec: + access: + secretRef: + name: gardener-landscape-kubeconfig + namespace: openmcp-system +``` + +The gardener landscape configuration requires a secret that contains the kubeconfig to access the Gardener project. For that purpose, create a secret named `gardener-landscape-kubeconfig` in the `openmcp-system` namespace of the platform cluster that contains the kubeconfig file that has access to the Gardener installation. 
+See the [Gardener documentation](https://gardener.cloud/docs/dashboard/automated-resource-management/#create-a-service-account) on how to create a service account in the Gardener project using the Gardener dashboard. +Create a service account with at least the `admin` role in the Gardener project. Then [download](https://gardener.cloud/docs/dashboard/automated-resource-management/#use-the-service-account) the kubeconfig for the service account and save it to a file named `./kubeconfigs/gardener-landscape.kubeconfig`. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig create namespace openmcp-system +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig create secret generic gardener-landscape-kubeconfig --from-file=kubeconfig=./kubeconfigs/gardener-landscape.kubeconfig -n openmcp-system +``` + +In the `extra-manifests` folder, create a file named `gardener-cluster-provider-shoot-small.yaml` with the following content: + + + + +```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-small.yaml" +apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 +kind: ProviderConfig +metadata: + name: shoot-small +spec: + landscapeRef: + name: gardener-landscape + project: + providerRef: + name: gardener + shootTemplate: + spec: + cloudProfile: + kind: CloudProfile + name: gcp + kubernetes: + version: "" # e.g.
"1.32" + maintenance: + autoUpdate: + kubernetesVersion: true + timeWindow: + begin: 220000+0200 + end: 230000+0200 + networking: + nodes: 10.180.0.0/16 + type: calico + provider: + controlPlaneConfig: + apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 + kind: ControlPlaneConfig + zone: # e.g. europe-west1-c + infrastructureConfig: + apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1 + kind: InfrastructureConfig + networks: + workers: 10.180.0.0/16 + type: gcp + workers: + - cri: + name: containerd + machine: + architecture: amd64 + image: + name: gardenlinux + version: "" # e.g. "1592.9.0" + type: n1-standard-2 + maxSurge: 1 + maximum: 5 + minimum: 1 + name: default-worker + volume: + size: 50Gi + type: pd-balanced + zones: + - # e.g. europe-west1-c + purpose: evaluation + region: # e.g. europe-west1 + secretBindingName: +``` + + + +```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-small.yaml" +apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 +kind: ProviderConfig +metadata: + name: shoot-small +spec: + landscapeRef: + name: gardener-landscape + project: + providerRef: + name: gardener + shootTemplate: + spec: + cloudProfile: + kind: CloudProfile + name: aws + kubernetes: + version: "" # e.g. "1.32" + maintenance: + autoUpdate: + kubernetesVersion: true + timeWindow: + begin: 220000+0200 + end: 230000+0200 + networking: + type: calico + nodes: 10.180.0.0/16 + provider: + controlPlaneConfig: + apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1 + kind: ControlPlaneConfig + cloudControllerManager: + useCustomRouteController: true + storage: + managedDefaultClass: true + infrastructureConfig: + apiVersion: aws.provider.extensions.gardener.cloud/v1alpha1 + kind: InfrastructureConfig + networks: + vpc: + cidr: 10.180.0.0/16 + zones: + - name: # e.g. 
eu-west-1a + workers: 10.180.0.0/19 + public: 10.180.32.0/20 + internal: 10.180.48.0/20 + type: aws + workers: + - cri: + name: containerd + machine: + architecture: amd64 + image: + name: gardenlinux + version: "" # e.g. "1592.9.0" + type: m5.large + maxSurge: 1 + maximum: 5 + minimum: 1 + name: default-worker + volume: + size: 50Gi + type: gp3 + zones: + - # e.g. eu-west-1a + purpose: evaluation + region: # e.g. eu-west-1 + secretBindingName: +``` + + + + +In the `extra-manifests` folder, create a file named `gardener-cluster-provider-shoot-workerless.yaml` with the following content: + + + + +```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-workerless.yaml" +apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 +kind: ProviderConfig +metadata: + name: shoot-workerless +spec: + landscapeRef: + name: gardener-landscape + project: + providerRef: + name: gardener + shootTemplate: + spec: + cloudProfile: + kind: CloudProfile + name: gcp + kubernetes: + version: "" # e.g. "1.32" + maintenance: + autoUpdate: + kubernetesVersion: true + timeWindow: + begin: 220000+0200 + end: 230000+0200 + provider: + type: gcp + purpose: evaluation + region: # eg europe-west1 +``` + + + +```yaml title="config/extra-manifests/gardener-cluster-provider-shoot-workerless.yaml" +apiVersion: gardener.clusters.openmcp.cloud/v1alpha1 +kind: ProviderConfig +metadata: + name: shoot-workerless +spec: + landscapeRef: + name: gardener-landscape + project: + providerRef: + name: gardener + shootTemplate: + spec: + cloudProfile: + kind: CloudProfile + name: aws + kubernetes: + version: "" # e.g. "1.32" + maintenance: + autoUpdate: + kubernetesVersion: true + timeWindow: + begin: 220000+0200 + end: 230000+0200 + provider: + type: aws + purpose: evaluation + region: # e.g. eu-west-1 +``` + + + + +Replace `` with the name of your Gardener project and `` with the name of the secret binding that contains the infrastructure secret for your Gardener project. 
+ +Also replace `` with the desired Kubernetes version (e.g. `1.32`), `` with the desired Garden Linux version (e.g. `1592.9.0`), `` with the desired region (e.g. `europe-west1`), and `` with the desired zone (e.g. `europe-west1-c`). + +:::info +Please adjust the Shoot configuration to your specific needs, e.g. change the purpose from `evaluation` to `production` if you are planning to use the MCP for productive purposes. For all details regarding the Shoot configuration, please consult the respective Gardener documentation. +::: + +Now run the `openmcp-bootstrapper` CLI tool to update the Git repository and deploy openMCP to the `platform` Gardener Shoot cluster: + +```shell +docker run --rm -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform.kubeconfig --extra-manifest-dir /config/extra-manifests /config/bootstrapper-config.yaml +``` + +You should see output similar to the following: + +```shell +Info: Downloading component ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp:v0.0.25 +Info: Creating template transformer +Info: Downloading template resources +/tmp/openmcp.cloud.bootstrapper-245193548/transformer/download/fluxcd: 9 file(s) with 691073 byte(s) written +/tmp/openmcp.cloud.bootstrapper-245193548/transformer/download/openmcp: 8 file(s) with 6625 byte(s) written +Info: Transforming templates into deployment repository structure +Info: Fetching openmcp-operator component version +Info: Cloning deployment repository https://github.com/reshnm/openmcp-deployment +Info: Checking out or creating branch gardener +Info: Applying templates from "gitops-templates/fluxcd"/"gitops-templates/openmcp" to deployment repository +Info: Templating providers: clusterProviders=[{gardener [] map[]}], serviceProviders=[], platformServices=[], imagePullSecrets=[] +Info: Applying Custom
Resource Definitions to deployment repository +/tmp/openmcp.cloud.bootstrapper-245193548/repo/resources/openmcp/crds: 8 file(s) with 484832 byte(s) written +/tmp/openmcp.cloud.bootstrapper-245193548/repo/resources/openmcp/crds: 3 file(s) with 198428 byte(s) written +Info: Applying extra manifests from /config/extra-manifests to deployment repository +Info: Committing and pushing changes to deployment repository +Info: Created commit: ee2b6ef079808fbc198b4f6eced1afb89f64d1d1 +Info: Running kustomize on /tmp/openmcp.cloud.bootstrapper-245193548/repo/envs/dev +Info: Applying Kustomization manifest: default/bootstrap +``` + +### Inspect the Git repository + +The desired state of the openMCP landscape has now been created in the Git repository and should look similar to the following structure: + +```shell +. +├── envs +│   └── dev +│   ├── fluxcd +│   │   ├── flux-kustomization.yaml +│   │   ├── gitrepo.yaml +│   │   └── kustomization.yaml +│   ├── kustomization.yaml +│   ├── openmcp +│   │   ├── config +│   │   │   └── openmcp-operator-config.yaml +│   │   └── kustomization.yaml +│   └── root-kustomization.yaml +└── resources + ├── fluxcd + │   ├── components.yaml + │   ├── flux-kustomization.yaml + │   ├── gitrepo.yaml + │   └── kustomization.yaml + ├── kustomization.yaml + ├── openmcp + │   ├── cluster-providers + │   │   └── gardener.yaml + │   ├── crds + │   │   ├── clusters.openmcp.cloud_accessrequests.yaml + │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml + │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml + │   │   ├── clusters.openmcp.cloud_clusters.yaml + │   │   ├── gardener.clusters.openmcp.cloud_clusterconfigs.yaml + │   │   ├── gardener.clusters.openmcp.cloud_landscapes.yaml + │   │   ├── gardener.clusters.openmcp.cloud_providerconfigs.yaml + │   │   ├── openmcp.cloud_clusterproviders.yaml + │   │   ├── openmcp.cloud_platformservices.yaml + │   │   └── openmcp.cloud_serviceproviders.yaml + │   ├── deployment.yaml + │   ├── extra + │   │   
├── gardener-cluster-provider-shoot-small.yaml + │   │   ├── gardener-cluster-provider-shoot-workerless.yaml + │   │   └── gardener-landscape.yaml + │   ├── kustomization.yaml + │   ├── namespace.yaml + │   └── rbac.yaml + └── root-kustomization.yaml +``` + +The `envs/` folder contains the Kustomization files that are used by FluxCD to deploy openMCP to the platform cluster. +The `resources` folder contains the base resources that are used by the Kustomization files in the `envs/` folder. + diff --git a/docs/operators/03-verify-setup.md b/docs/operators/03-verify-setup.md new file mode 100644 index 0000000..d778295 --- /dev/null +++ b/docs/operators/03-verify-setup.md @@ -0,0 +1,521 @@ +--- +sidebar_position: 3 +--- + +import Tabs from '@theme/Tabs'; +import CodeBlock from '@theme/CodeBlock'; +import TabItem from '@theme/TabItem'; + +# Verify Setup + +After deploying OpenMCP using the bootstrapper, verify that all components are running correctly. + + + + +## Inspect the Kustomizations in the Kind cluster + +Force an update of the GitRepository and Kustomization in the Kind cluster to pick up the changes made in the Git repository. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system patch kustomization flux-system --type merge -p '{"spec":{"force":true}}' +``` + +Get the status of the GitRepository in the Kind cluster. + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME URL AGE READY STATUS +flux-system environments https://github.com// 9m6s True stored artifact for revision 'docs@sha1:...' 
+```
+
+So we have now successfully configured FluxCD to watch for changes in the specified GitHub repository, using the `environments` custom resource of kind `GitRepository`.
+Now let's get the status of the Kustomization in the Kind cluster.
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A
+```
+
+You should see output similar to the following:
+
+```shell
+NAMESPACE     NAME          AGE     READY   STATUS
+default       bootstrap     5m31s   True    Applied revision: docs@sha1:...
+flux-system   flux-system   10m     True    Applied revision: docs@sha1:...
+```
+
+You can see that there are now two Kustomizations in the Kind cluster.
+The `flux-system` Kustomization deploys the FluxCD controllers, and the `bootstrap` Kustomization deploys openMCP to the Kind cluster.
+
+## Inspect the deployed openMCP components in the Kind cluster
+
+Now check the deployed openMCP components.
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system
+```
+
+You should see output similar to the following:
+
+```shell
+NAME                                      READY   STATUS      RESTARTS   AGE
+cp-kind-6b4886b7cf-z54pg                  1/1     Running     0          20s
+cp-kind-init-msqg7                        0/1     Completed   0          27s
+openmcp-operator-5f784f47d7-nfg65         1/1     Running     0          34s
+ps-managedcontrolplane-668c99c97c-9jltx   1/1     Running     0          4s
+ps-managedcontrolplane-init-49rx2         0/1     Completed   0          27s
+```
+
+The openmcp-operator, the managedcontrolplane platform service, and the `kind` cluster provider are now running.
+You are now ready to create and manage clusters using openMCP.
+
+## Get Access to the Onboarding Cluster
+
+The openmcp-operator should now have created an onboarding `Cluster` resource on the platform cluster that represents the onboarding cluster.
+The onboarding cluster is a special cluster that is used to create new managed control planes.
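The onboarding `Cluster` resource can take a moment to become ready. As a sketch, you can block until it reports phase `Ready` before proceeding (the resource name and namespace are taken from the output shown below; the timeout is an assumption, and `--for=jsonpath` requires kubectl 1.23 or later):

```shell
# Wait until the onboarding Cluster resource reports .status.phase == Ready
kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig wait \
  --for=jsonpath='{.status.phase}'=Ready \
  -n openmcp-system clusters.clusters.openmcp.cloud/onboarding \
  --timeout=300s
```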
+ +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusters.clusters.openmcp.cloud -A +``` + +You should see output similar to the following: + +```shell +NAMESPACE NAME PURPOSES PHASE VERSION PROVIDER AGE +openmcp-system onboarding ["onboarding"] Ready 11m +``` + +Now you can retrieve the kubeconfig of the onboarding cluster. +Use `kind` to retrieve the list of available clusters. + +```shell +kind get clusters +``` + +You should see output similar to the following: + +```shell +onboarding.12345678 +platform +``` + +You can now see the new onboarding cluster. +Get the kubeconfig of the onboarding cluster and save it to a file named `onboarding.kubeconfig` in the configuration folder. +Please replace `onboarding.12345678` with the actual name of your onboarding cluster. + +```shell +kind get kubeconfig --name onboarding.12345678 > ./kubeconfigs/onboarding.kubeconfig +``` + +## Create a Managed Control Plane + +Create a file named `my-mcp.yaml` with the following content in the configuration folder: + +```yaml title="config/my-mcp.yaml" +apiVersion: core.openmcp.cloud/v2alpha1 +kind: ManagedControlPlaneV2 +metadata: + name: my-mcp + namespace: default +spec: + iam: {} +``` + +Apply the file to the onboarding cluster: + +```shell +kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/my-mcp.yaml +``` + +The openmcp-operator should start to create the necessary resources in order to create the managed control plane. As a result, a new `Managed Control Plane` should be available soon. 
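Once the resource exists, you can print just its phase instead of re-reading the full manifest each time (a sketch; it assumes the phase is surfaced in `.status.phase`, as the full status output below shows):

```shell
# Print only the phase of the Managed Control Plane, e.g. "Ready"
kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp \
  -o jsonpath='{.status.phase}{"\n"}'
```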
+You can check the status of the Managed Control Plane using the following command:
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get managedcontrolplanev2 -n default my-mcp -o yaml
+```
+
+You should see output similar to the following:
+
+```yaml
+apiVersion: core.openmcp.cloud/v2alpha1
+kind: ManagedControlPlaneV2
+metadata:
+  finalizers:
+  - core.openmcp.cloud/mcp
+  - request.clusters.openmcp.cloud/my-mcp
+  name: my-mcp
+  namespace: default
+spec:
+  iam: {}
+status:
+  conditions:
+  - lastTransitionTime: "2025-09-16T13:03:55Z"
+    message: All accesses are ready
+    observedGeneration: 1
+    reason: AllAccessReady_True
+    status: "True"
+    type: AllAccessReady
+  - lastTransitionTime: "2025-09-16T13:03:55Z"
+    message: Cluster conditions have been synced to MCP
+    observedGeneration: 1
+    reason: ClusterConditionsSynced_True
+    status: "True"
+    type: ClusterConditionsSynced
+  - lastTransitionTime: "2025-09-16T13:03:55Z"
+    message: ClusterRequest is ready
+    observedGeneration: 1
+    reason: ClusterRequestReady_True
+    status: "True"
+    type: ClusterRequestReady
+  - lastTransitionTime: "2025-09-16T13:03:50Z"
+    message: ""
+    observedGeneration: 1
+    reason: Meta_True
+    status: "True"
+    type: Meta
+  observedGeneration: 1
+  phase: Ready
+```
+
+You should see that the Managed Control Plane is in phase `Ready`.
+The openmcp-operator should now have created a new Kind cluster that represents the Managed Control Plane.
+You can check the list of available Kind clusters using the following command:
+
+```shell
+kind get clusters
+```
+
+You should see output similar to the following:
+
+```shell
+mcp-worker-abcde.87654321
+onboarding.12345678
+platform
+```
+
+You can now get the kubeconfig of the managed control plane and save it to a file named `my-mcp.kubeconfig` in the kubeconfigs folder. Please replace `mcp-worker-abcde.87654321` with the actual name of your managed control plane cluster.
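If you prefer scripting the lookup to copying the name by hand, filtering `kind get clusters` by the `mcp-worker` prefix works (demonstrated here against the sample listing above so the snippet is self-contained; in practice, pipe `kind get clusters` directly into the `grep`):

```shell
# Stand-in for the `kind get clusters` output above; in practice run:
#   kind get clusters | grep '^mcp-worker-'
printf 'mcp-worker-abcde.87654321\nonboarding.12345678\nplatform\n' | grep '^mcp-worker-'
# -> mcp-worker-abcde.87654321
```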
+ +```shell +kind get kubeconfig --name mcp-worker-abcde.87654321 > ./kubeconfigs/my-mcp.kubeconfig +``` + +You can now use the kubeconfig to access the Managed Control Plane cluster. + +```shell +kubectl --kubeconfig ./kubeconfigs/my-mcp.kubeconfig get namespaces +``` + +## Deploy the Crossplane Service Provider + +Update the bootstrapping configuration file (bootstrapper-config.yaml) to include the crossplane service provider. + +```yaml title="config/bootstrapper-config.yaml" +component: + location: ghcr.io/openmcp-project/components//github.com/openmcp-project/openmcp: + +repository: + url: https://github.com// + pushBranch: + +environment: + +providers: + clusterProviders: + - name: kind + config: + extraVolumeMounts: + - mountPath: /var/run/docker.sock + name: docker + extraVolumes: + - name: docker + hostPath: + path: /var/run/host-docker.sock + type: Socket + serviceProviders: + - name: crossplane + +openmcpOperator: + config: + managedControlPlane: + mcpClusterPurpose: mcp-worker + reconcileMCPEveryXDays: 7 + scheduler: + scope: Cluster + purposeMappings: + mcp: + template: + spec: + profile: kind + tenancy: Exclusive + mcp-worker: + template: + spec: + profile: kind + tenancy: Exclusive + platform: + template: + metadata: + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: kind + tenancy: Shared + onboarding: + template: + metadata: + labels: + clusters.openmcp.cloud/delete-without-requests: "false" + spec: + profile: kind + tenancy: Shared + workload: + tenancyCount: 20 + template: + spec: + profile: kind + tenancy: Shared +``` + +Create a new folder named `extra-manifests` in the configuration folder. Then create a file named `crossplane-provider.yaml` with the following content, and save it in the new `extra-manifests` folder. + +:::info +Note that service provider crossplane only supports the installation of crossplane from an OCI registry. 
Replace the chart locations in the `ProviderConfig` with the OCI registry to which you mirror your crossplane chart versions. OpenMCP will provide this as part of an open-source [release channel](https://github.com/openmcp-project/backlog/issues/323) in an upcoming update.
+:::
+
+```yaml title="config/extra-manifests/crossplane-provider.yaml"
+apiVersion: crossplane.services.openmcp.cloud/v1alpha1
+kind: ProviderConfig
+metadata:
+  name: default
+spec:
+  versions:
+  - version: v2.0.2
+    chart:
+      url: ghcr.io/openmcp-project/charts/crossplane:2.0.2
+    image:
+      url: xpkg.crossplane.io/crossplane/crossplane:v2.0.2
+  - version: v1.20.1
+    chart:
+      url: ghcr.io/openmcp-project/charts/crossplane:1.20.1
+    image:
+      url: xpkg.crossplane.io/crossplane/crossplane:v1.20.1
+  providers:
+    availableProviders:
+    - name: provider-kubernetes
+      package: xpkg.upbound.io/upbound/provider-kubernetes
+      versions:
+      - v0.16.0
+```
+
+Run the `openmcp-bootstrapper` CLI tool to update the Git repository and deploy the crossplane service provider to the Kind cluster.
+
+```shell
+docker run --rm --network kind -v ./config:/config -v ./kubeconfigs:/kubeconfigs ghcr.io/openmcp-project/images/openmcp-bootstrapper:${OPENMCP_BOOTSTRAPPER_VERSION} manage-deployment-repo --git-config /config/git-config.yaml --kubeconfig /kubeconfigs/platform-int.kubeconfig --extra-manifest-dir /config/extra-manifests /config/bootstrapper-config.yaml
+```
+
+Note the `--extra-manifest-dir` parameter, which points to the folder containing the extra manifest file created in the previous step. All manifest files in this folder will be added to the Kustomization used by FluxCD to deploy openMCP to the Kind cluster.
+
+The git repository should now be updated:
+
+```shell
+.
+├── envs
+│   └── dev
+│       ├── fluxcd
+│       │   ├── flux-kustomization.yaml
+│       │   ├── gitrepo.yaml
+│       │   └── kustomization.yaml
+│       ├── kustomization.yaml
+│       ├── openmcp
+│       │   ├── config
+│       │   │   └── openmcp-operator-config.yaml
+│       │   └── kustomization.yaml
+│       └── root-kustomization.yaml
+└── resources
+    ├── fluxcd
+    │   ├── components.yaml
+    │   ├── flux-kustomization.yaml
+    │   ├── gitrepo.yaml
+    │   └── kustomization.yaml
+    ├── kustomization.yaml
+    ├── openmcp
+    │   ├── cluster-providers
+    │   │   └── kind.yaml
+    │   ├── crds
+    │   │   ├── clusters.openmcp.cloud_accessrequests.yaml
+    │   │   ├── clusters.openmcp.cloud_clusterprofiles.yaml
+    │   │   ├── clusters.openmcp.cloud_clusterrequests.yaml
+    │   │   ├── clusters.openmcp.cloud_clusters.yaml
+    │   │   ├── crossplane.services.openmcp.cloud_providerconfigs.yaml
+    │   │   ├── kind.clusters.openmcp.cloud_providerconfigs.yaml
+    │   │   ├── openmcp.cloud_clusterproviders.yaml
+    │   │   ├── openmcp.cloud_platformservices.yaml
+    │   │   └── openmcp.cloud_serviceproviders.yaml
+    │   ├── deployment.yaml
+    │   ├── extra
+    │   │   └── crossplane-provider.yaml
+    │   ├── kustomization.yaml
+    │   ├── namespace.yaml
+    │   ├── rbac.yaml
+    │   └── service-providers
+    │       └── crossplane.yaml
+    └── root-kustomization.yaml
+```
+
+After a while, the Kustomization in the Kind cluster should be updated and the crossplane service provider should be deployed.
+You can force an update of the Kustomization in the Kind cluster to pick up the changes made in the Git repository.
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)"
+kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n default patch kustomization bootstrap --type merge -p '{"spec":{"force":true}}'
+```
+
+List the pods in the `openmcp-system` namespace again:
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system
+```
+
+You should see output similar to the following:
+
+```shell
+NAME                                      READY   STATUS      RESTARTS   AGE
+cp-kind-6b4886b7cf-z54pg                  1/1     Running     0          18m
+cp-kind-init-msqg7                        0/1     Completed   0          18m
+openmcp-operator-5f784f47d7-nfg65         1/1     Running     0          18m
+ps-managedcontrolplane-668c99c97c-9jltx   1/1     Running     0          18m
+ps-managedcontrolplane-init-49rx2         0/1     Completed   0          18m
+sp-crossplane-6b8cccc775-9hx98            1/1     Running     0          105s
+sp-crossplane-init-6hvf4                  0/1     Completed   0          2m11s
+```
+
+You should see that the crossplane service provider is running. From now on, openMCP is able to provide Crossplane service instances via the new crossplane service provider.
+
+## Create a Crossplane service instance on the onboarding cluster
+
+Create a file named `crossplane-instance.yaml` with the following content in the configuration folder:
+
+```yaml title="config/crossplane-instance.yaml"
+apiVersion: crossplane.services.openmcp.cloud/v1alpha1
+kind: Crossplane
+metadata:
+  name: my-mcp
+  namespace: default
+spec:
+  version: v1.20.1
+  providers:
+  - name: provider-kubernetes
+    version: v0.16.0
+```
+
+The requested version must be one of the versions offered in the `ProviderConfig` created earlier.
+
+Apply the file to the onboarding cluster:
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig apply -f ./config/crossplane-instance.yaml
+```
+
+The Crossplane service provider should now start to create the necessary resources for the new Crossplane instance. As a result, a new Crossplane service instance should soon be available.
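Optionally, you can block until the instance reports its readiness condition instead of polling (a sketch; the condition type comes from the status output shown in this guide, while the timeout is an assumption):

```shell
# Wait for the CrossplaneReady condition on the Crossplane instance
kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig wait \
  --for=condition=CrossplaneReady -n default crossplane/my-mcp \
  --timeout=600s
```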
+You can check the status of the Crossplane instance using the following command:
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/onboarding.kubeconfig get crossplane -n default my-mcp -o yaml
+```
+
+After a while, you should see output similar to the following:
+
+```yaml
+apiVersion: crossplane.services.openmcp.cloud/v1alpha1
+kind: Crossplane
+metadata:
+  finalizers:
+  - openmcp.cloud/finalizers
+  generation: 1
+  name: my-mcp
+  namespace: default
+spec:
+  providers:
+  - name: provider-kubernetes
+    version: v0.16.0
+  version: v1.20.1
+status:
+  conditions:
+  - lastTransitionTime: "2025-09-16T14:09:56Z"
+    message: Crossplane is healthy.
+    reason: Healthy
+    status: "True"
+    type: CrossplaneReady
+  - lastTransitionTime: "2025-09-16T14:10:01Z"
+    message: ProviderKubernetes is healthy.
+    reason: Healthy
+    status: "True"
+    type: ProviderKubernetesReady
+  observedGeneration: 0
+  phase: ""
+```
+
+Crossplane and the Kubernetes provider should now be available on the MCP cluster.
+
+```shell
+kubectl --kubeconfig ./kubeconfigs/my-mcp.kubeconfig api-resources | grep 'crossplane\|kubernetes'
+```
+
+
+
+## Inspect the Kustomizations in the platform cluster
+
+After running the bootstrapper for Gardener, verify the deployment status.
+
+Force an update of the GitRepository and Kustomization in the platform cluster to pick up the changes made in the Git repository.
+ +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system annotate gitrepository environments reconcile.fluxcd.io/requestedAt="$(date +%s)" +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig -n flux-system patch kustomization flux-system --type merge -p '{"spec":{"force":true}}' +``` + +Get the status of the GitRepository: + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get gitrepositories.source.toolkit.fluxcd.io -A +``` + +Get the status of the Kustomizations: + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get kustomizations.kustomize.toolkit.fluxcd.io -A +``` + +You should see the `flux-system` and `bootstrap` Kustomizations in Ready state. + +## Inspect the deployed openMCP components + +Check the deployed openMCP components: + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get pods -n openmcp-system +``` + +You should see the openmcp-operator, managedcontrolplane platform service, and the gardener cluster provider running. + +## Get Access to the Onboarding Cluster + +Check that the onboarding cluster has been created: + +```shell +kubectl --kubeconfig ./kubeconfigs/platform.kubeconfig get clusters.clusters.openmcp.cloud -A +``` + +For Gardener, retrieve the onboarding cluster kubeconfig using the Gardener API or dashboard, then save it to `./kubeconfigs/onboarding.kubeconfig`. + +## Create a Managed Control Plane + +Follow the same steps as the Kind provider to create a managed control plane on the onboarding cluster. + + + diff --git a/docs/users/00-getting-started.md b/docs/users/00-getting-started.md index ef3b09d..6dabb06 100644 --- a/docs/users/00-getting-started.md +++ b/docs/users/00-getting-started.md @@ -1,126 +1,3 @@ -# Getting Started +# Welcome -> This is a quick guide on how to get started with the openMCP platform. This guide is not complete and will be extended in the future. - -## Setup - -### 1. 
Create a `Project` -A `Project` is the starting point of your Manged Control Plane (MCP) journey. It is a logical grouping of `Workspaces` and `ManagedControlPlanes`. A `Project` can be used to represent an organization, department, team or any other logical grouping of resources. -```yaml -apiVersion: core.openmcp.cloud/v1alpha1 -kind: Project -metadata: - name: platform-team - annotations: - openmcp.cloud/display-name: Platform Team -spec: - members: - - kind: User - name: first.user@example.com - roles: - - admin - - kind: User - name: second.user@example.com - roles: - - view -``` - -### 2. Create a `Workspace` in the `Project` - -A `Workspace` is a logical grouping of `ManagedControlPlanes`. A `Workspace` can be used to represent an environment (e.g. dev, staging, prod) or again an organization, department, team or any other logical grouping of resources. - -```yaml -apiVersion: core.openmcp.cloud/v1alpha1 -kind: Workspace -metadata: - name: dev - namespace: project-platform-team - annotations: - openmcp.cloud/display-name: Platform Team - Dev -spec: - members: - - kind: User - name: first.user@example.com - roles: - - admin - - kind: User - name: second.user@example.com - roles: - - view -``` - -### 3. Create a `ManagedControlPlane` in the `Workspace` - -The `ManagedControlPlane` resource is the heart of the openMCP platform. Each Managed Control Plane (MCP) has its own Kubernetes API endpoint and data store. You can use the `iam` property to define who should have access to the MCP and the resources it contains. 
- -```yaml -apiVersion: core.openmcp.cloud/v2alpha1 -kind: ManagedControlPlaneV2 -metadata: - name: mcp-01 - namespace: project-platform-team--ws-dev -spec: - iam: - oidc: # for human authentication - defaultProvider: - roleBindings: # authorization for human users - - roleRefs: - - kind: ClusterRole - name: cluster-admin - subjects: - - kind: User - name: first.user@example.com - - kind: User - name: second.user@example.com - tokens: # for machine authentication - - name: xyz-service-token - roleRefs: # authorization for machine users - - kind: ClusterRole - name: cluster-admin -``` - -Under `spec.iam` you can define the authentication for your ManagedControlPlane. You can use OIDC-based authentication for human users and token-based authentication for machine users. -For authorization, ClusterRoleBindings will map the specified roles to the defined subjects. For token-based authentication, the specified roles will get bound to a generated ServiceAccount on the ManagedControlPlane. - -In `status.access` you will find the references to the secrets at the Onboarding API that contain the kubeconfig to access your MCP for the OIDC and/or token-based authentication methods. - -### 4. Install managed services in your Managed Control Plane (MCP) - -You can install managed services in your Managed Control Plane (MCP) to extend its functionality. Currently, the following managed services are available: -- Crossplane via the [service-provider-crossplane](https://github.com/openmcp-project/service-provider-crossplane) -- Landscaper via the [service-provider-landscaper](https://github.com/openmcp-project/service-provider-landscaper) - -#### Managed Service: Crossplane - -Crossplane is an open source project that enables you to manage cloud infrastructure and services using Kubernetes-style declarative configuration. It allows you to define and manage cloud resources such as databases, storage, and networking using Kubernetes manifests. 
- -To install Crossplane in your MCP, you need to create a `Crossplane` resource in the same namespace as your `ManagedControlPlane`. The following example installs Crossplane version `v1.20.0` with the `provider-kubernetes` provider version `v0.16.0`. - -```yaml -apiVersion: crossplane.services.openmcp.cloud/v1alpha1 -kind: Crossplane -metadata: - name: mcp-01 # Same name as your ManagedControlPlane - namespace: project-platform-team--ws-dev # Same namespace as your ManagedControlPlane -spec: - version: v1.20.0 - providers: - - name: provider-kubernetes - version: v0.16.0 -``` - -#### Managed Service: Landscaper - -Landscaper manages the installation, updates, and uninstallation of cloud-native workloads, with focus on larger complexities, while being capable of handling complex dependency chains between the individual components. - -To install a Landscaper for your MCP, you need to create a `Landscaper` resource with the same namespace and name as your `ManagedControlPlane`. The following example installs the Landscaper with default configuration. - -```yaml -apiVersion: landscaper.services.openmcp.cloud/v1alpha2 -kind: Landscaper -metadata: - name: mcp-01 # Same name as your ManagedControlPlane - namespace: project-platform-team--ws-dev # Same namespace as your ManagedControlPlane -spec: - version: v0.142.0 -``` +TODO \ No newline at end of file diff --git a/docs/about/concepts/_category_.yml b/docs/users/concepts/_category_.yml similarity index 100% rename from docs/about/concepts/_category_.yml rename to docs/users/concepts/_category_.yml diff --git a/docs/users/concepts/cluster-provider.md b/docs/users/concepts/cluster-provider.md new file mode 100644 index 0000000..a4c33cf --- /dev/null +++ b/docs/users/concepts/cluster-provider.md @@ -0,0 +1,3 @@ +# Cluster Providers + +Cluster providers are responsible for the dynamic creation, modification, and deletion of Kubernetes clusters in an OpenControlPlane environment. 
They hide the specifics of concrete cluster technologies (e.g., [Gardener](https://gardener.cloud/) and [Kubernetes-in-Docker](https://kind.sigs.k8s.io/)) behind a uniform interface. This allows operators to install an OpenControlPlane system in different environments and on various infrastructure providers without having to adjust the other components of the system.
diff --git a/docs/users/concepts/managed-control-plane.md b/docs/users/concepts/managed-control-plane.md
new file mode 100644
index 0000000..3a98e1e
--- /dev/null
+++ b/docs/users/concepts/managed-control-plane.md
@@ -0,0 +1,3 @@
+# Managed Control Planes (MCPs)
+
+Managed Control Planes (MCPs) are at the heart of OpenControlPlane. Simply put, they are lightweight Kubernetes clusters that store the desired state and current status of various resources. All resources follow the Kubernetes Resource Model (KRM), allowing infrastructure resources, deployments, etc., to be managed with common Kubernetes tools like kubectl, kustomize, Helm, Flux, ArgoCD, and so on.
diff --git a/docs/users/concepts/platform-service.md b/docs/users/concepts/platform-service.md
new file mode 100644
index 0000000..bcff4e6
--- /dev/null
+++ b/docs/users/concepts/platform-service.md
@@ -0,0 +1,3 @@
+# Platform Services
+
+Platform services add functionality to an OpenControlPlane environment (not to individual MCPs). Examples include network services (Gateway API, Ingress), audit logs, billing, grouping of MCPs, and system-wide policies. They are installed and configured by the platform operator and apply to the entire system.
diff --git a/docs/users/concepts/service-provider.md b/docs/users/concepts/service-provider.md
new file mode 100644
index 0000000..3c0966f
--- /dev/null
+++ b/docs/users/concepts/service-provider.md
@@ -0,0 +1,3 @@
+# Service Providers
+
+Without service providers, MCPs are of little use. Service providers add functionality such as cloud provider APIs, GitOps, policies, or backup and restore to MCPs.
The operators of an OpenControlPlane environment decide which service providers are available to end users. The end users can then activate them for their MCPs. diff --git a/docs/about/design/_category_.yml b/docs/users/design/_category_.yml similarity index 100% rename from docs/about/design/_category_.yml rename to docs/users/design/_category_.yml diff --git a/docs/about/design/service-provider.md b/docs/users/design/service-provider.md similarity index 92% rename from docs/about/design/service-provider.md rename to docs/users/design/service-provider.md index 03109fc..3bd66a9 100644 --- a/docs/about/design/service-provider.md +++ b/docs/users/design/service-provider.md @@ -1,10 +1,10 @@ # Service Providers -This document outlines the `ServiceProvider` domain and its technical considerations within the context of the [openMCP project](https://github.com/openmcp-project/), providing a foundation for understanding its architecture and operational aspects. +This document outlines the `ServiceProvider` domain and its technical considerations within the context of the [OpenControlPlane project](https://github.com/openmcp-project/), providing a foundation for understanding its architecture and operational aspects. ## Goals -- Define clear terminology around `ServiceProvider` within the openMCP project +- Define clear terminology around `ServiceProvider` within the OpenControlPlane project - Establish the scope of a `ServiceProvider`, including its responsibilities and boundaries - Define a `ServiceProvider` implementation layer to implement common features and ensure consistency across `ServiceProvider` instances - Outline how a `ServiceProvider` can be validated @@ -16,14 +16,14 @@ This document outlines the `ServiceProvider` domain and its technical considerat ## Terminology -- `End Users`: These are the consumers of services provided by an openMCP platform installation. 
They operate on the `OnboardingCluster` and `MCPCluster` (see [deployment model](#deployment-model)). -- `Platform Operators`: These are either human users or technical systems that are responsible for managing an openMCP platform installation. While they may operate on any cluster, their primary focus is on the `PlatformCluster` and `WorkloadCluster`. +- `End Users`: These are the consumers of services provided by an OpenControlPlane platform installation. They operate on the `OnboardingCluster` and `MCPCluster` (see [deployment model](#deployment-model)). +- `Platform Operators`: These are either human users or technical systems that are responsible for managing an OpenControlPlane platform installation. While they may operate on any cluster, their primary focus is on the `PlatformCluster` and `WorkloadCluster`. ## Domain A `ServiceProvider` enables platform operators to offer managed `DomainServices` to end users. A `DomainService` is a third-party service that delivers its functionality to end users through a `DomainServiceAPI`. -For example, consider an openMCP installation that aims to provide [Crossplane](https://www.crossplane.io/) as a managed service to its end users. Let's assume that end users specifically want to use the `Object` API of [provider-kubernetes](https://github.com/crossplane-contrib/provider-kubernetes), to create Kubernetes objects on their own Kubernetes clusters without the need to manage Crossplane themselves. +For example, consider an OpenControlPlane installation that aims to provide [Crossplane](https://www.crossplane.io/) as a managed service to its end users. Let's assume that end users specifically want to use the `Object` API of [provider-kubernetes](https://github.com/crossplane-contrib/provider-kubernetes), to create Kubernetes objects on their own Kubernetes clusters without the need to manage Crossplane themselves. 
If we map this to the terminology of a `DomainService` and `DomainServiceAPI`: diff --git a/docs/users/ecosystem.md b/docs/users/ecosystem.md new file mode 100644 index 0000000..e1482c8 --- /dev/null +++ b/docs/users/ecosystem.md @@ -0,0 +1,144 @@ +--- +sidebar_position: 2 +--- + +# Ecosystem + +OpenControlPlane is built on top of amazing open-source projects from the cloud native ecosystem. Here are the key projects that power our platform. + +
+## Kubernetes
+
+The foundation of OpenControlPlane. We extend the Kubernetes API through Custom Resource Definitions (CRDs), enabling you to configure infrastructure, services, and applications using the same familiar API.
+
+## Crossplane
+
+A CNCF project that orchestrates anything through Kubernetes. Enable Crossplane as a service provider to give your users access to the rich ecosystem of Crossplane providers.
+
+We endorse using these Crossplane providers: AWS, Azure, GCP, IBM Cloud, and SAP BTP.
+
+## Gardener
+
+Delivers fully-managed Kubernetes clusters at scale across AWS, Azure, GCP, OpenStack, and more. Use Gardener as a cluster provider in OpenControlPlane for automated cluster management.
+
+## Flux
+
+Continuous and progressive delivery for Kubernetes. Enable Flux to provide GitOps capabilities in your Managed Control Planes, allowing declarative infrastructure management from Git.
+
+## Kyverno
+
+Policy-as-Code for Kubernetes and cloud native environments. Define team-internal and organization-wide policies to establish security standards and corporate compliance requirements.
+
+## External Secrets
+
+Integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, and more. Automatically sync secrets into Kubernetes or push generated secrets to external systems.
+
+## Open Component Model
+
+An open standard for describing software artifacts and lifecycle metadata in a technology-agnostic way. Used by OpenControlPlane to package and deliver components reliably to any environment.
+
+## Landscaper
+
+Describes, installs, and maintains cloud-native landscapes. Activate as a service provider to simplify the rollout of complex software products for your users with declarative installations.
+
+ +## Why These Projects? + +All of these projects share OpenControlPlane's commitment to: + +- **Open Standards**: Built on Kubernetes and cloud native principles +- **Extensibility**: Designed to be extended and customized +- **Declarative Management**: Infrastructure and services as code +- **Community-Driven**: Active CNCF and open-source communities + +By building on these proven foundations, OpenControlPlane provides a robust, scalable platform for managing your cloud infrastructure and services. diff --git a/docs/users/getting-started/01-onboard.md b/docs/users/getting-started/01-onboard.md new file mode 100644 index 0000000..96013f4 --- /dev/null +++ b/docs/users/getting-started/01-onboard.md @@ -0,0 +1,212 @@ +--- +sidebar_position: 1 +--- + +# 1. Onboard + +This guide walks you through creating the foundational resources for your OpenControlPlane setup: Project, Workspace, and ControlPlane. + +## Understanding the Hierarchy + +OpenControlPlane organizes resources in a three-level hierarchy: + +```mermaid +flowchart TD + subgraph OnboardingAPI["Onboarding API"] + P["Project
platform-team"]
+
+    subgraph NS1["project-platform-team"]
+      W1["Workspace<br/>dev"]
+      W2["Workspace<br/>prod"]
+    end
+
+    subgraph NS2["project-platform-team--ws-dev"]
+      M1["ControlPlane<br/>my-controlplane"]
+      M2["ControlPlane<br/>another-cp"]
+    end
+
+    subgraph NS3["project-platform-team--ws-prod"]
+      M3["ControlPlane<br/>prod-cp"]
+    end
+  end
+
+  P --> W1
+  P --> W2
+  W1 --> M1
+  W1 --> M2
+  W2 --> M3
+
+  style P fill:#2CE0BF,stroke:#07838F,color:#012931
+  style W1 fill:#C2FCEE,stroke:#049F9A,color:#02414C
+  style W2 fill:#C2FCEE,stroke:#049F9A,color:#02414C
+  style M1 fill:#fff,stroke:#07838F,color:#02414C
+  style M2 fill:#fff,stroke:#07838F,color:#02414C
+  style M3 fill:#fff,stroke:#07838F,color:#02414C
+```
+
+- **Project** — Top-level organization unit (team, department, or org)
+- **Workspace** — Environment separation within a project (dev, staging, prod)
+- **ControlPlane** — Your actual Kubernetes API endpoint with its own data store
+
+## Prerequisites
+
+Before you begin, ensure you have:
+
+| Requirement | Description |
+|-------------|-------------|
+| **Onboarding API access** | Your platform operator provides the API endpoint and credentials |
+| **kubectl** | Version 1.25 or later ([install guide](https://kubernetes.io/docs/tasks/tools/)) |
+| **kubeconfig** | Configured to connect to the Onboarding API |
+
+:::tip Platform Access
+If you don't have access to an OpenControlPlane installation, contact your platform operator. Operators can follow the [Bootstrapping Guide](../../operators/00-overview.md) to set up a new environment.
+:::
+
+---
+
+## Step 1: Create a Project
+
+A `Project` is the starting point of your ControlPlane journey. It's a logical grouping of `Workspaces` and `ControlPlanes`. Use a Project to represent an organization, department, team, or any other logical grouping.
+ +```yaml +apiVersion: core.openmcp.cloud/v1alpha1 +kind: Project +metadata: + name: platform-team + annotations: + openmcp.cloud/display-name: Platform Team +spec: + members: + - kind: User + name: first.user@example.com + roles: + - admin + - kind: User + name: second.user@example.com + roles: + - view +``` + +Apply it to the Onboarding API: + +```bash +kubectl apply -f project.yaml +``` + +--- + +## Step 2: Create a Workspace + +A `Workspace` is a logical grouping of `ControlPlanes`. Use Workspaces to represent environments (dev, staging, prod) or other organizational boundaries. + +```yaml +apiVersion: core.openmcp.cloud/v1alpha1 +kind: Workspace +metadata: + name: dev + namespace: project-platform-team + annotations: + openmcp.cloud/display-name: Platform Team - Dev +spec: + members: + - kind: User + name: first.user@example.com + roles: + - admin + - kind: User + name: second.user@example.com + roles: + - view +``` + +:::info Namespace Convention +Workspaces live in a namespace named `project-`. For example, a Workspace in the `platform-team` Project goes in the `project-platform-team` namespace. +::: + +```bash +kubectl apply -f workspace.yaml +``` + +--- + +## Step 3: Create a ControlPlane + +The `ControlPlane` resource is the heart of OpenControlPlane. Each ControlPlane has its own Kubernetes API endpoint and data store. You can use the `iam` property to define who can access the ControlPlane. + +```yaml +apiVersion: core.openmcp.cloud/v2alpha1 +kind: ManagedControlPlaneV2 +metadata: + name: my-controlplane + namespace: project-platform-team--ws-dev +spec: + iam: + oidc: + defaultProvider: + roleBindings: + - roleRefs: + - kind: ClusterRole + name: cluster-admin + subjects: + - kind: User + name: first.user@example.com + - kind: User + name: second.user@example.com + tokens: + - name: ci-service-token + roleRefs: + - kind: ClusterRole + name: cluster-admin +``` + +:::info Namespace Convention +ControlPlanes live in a namespace named `project---ws-`. 
For example, a ControlPlane in the `dev` Workspace of the `platform-team` Project goes in `project-platform-team--ws-dev`. +::: + +### Authentication & Authorization + +The `spec.iam` section controls who can access your ControlPlane and what they can do. + +#### Human Authentication (OIDC) + +For users authenticating through your identity provider: + +```yaml +iam: + oidc: + defaultProvider: + roleBindings: + - roleRefs: + - kind: ClusterRole + name: cluster-admin + subjects: + - kind: User + name: alice@example.com +``` + +OpenControlPlane creates ClusterRoleBindings in your ControlPlane based on these specifications. + +#### Machine Authentication (Tokens) + +For CI/CD pipelines and service accounts: + +```yaml +iam: + tokens: + - name: ci-service-token + roleRefs: + - kind: ClusterRole + name: cluster-admin +``` + +For token-based auth, a ServiceAccount is automatically generated and bound to the specified roles. + +```bash +kubectl apply -f controlplane.yaml +``` + +--- + +## Next Steps + +Continue to [2. Connect](./02-connect.md) to retrieve credentials and access your ControlPlane. diff --git a/docs/users/getting-started/02-connect.md b/docs/users/getting-started/02-connect.md new file mode 100644 index 0000000..371d7ae --- /dev/null +++ b/docs/users/getting-started/02-connect.md @@ -0,0 +1,72 @@ +--- +sidebar_position: 2 +--- + +# 2. Connect + +This guide shows you how to retrieve credentials and connect to your ControlPlane using kubectl. + +## Check ControlPlane Status + +First, verify your ControlPlane is ready: + +```bash +kubectl get managedcontrolplanev2 my-controlplane -n project-platform-team--ws-dev +``` + +Wait until the ControlPlane shows a ready status. The `status.access` field contains references to your credentials. + +## Retrieve Your Kubeconfig + +The ControlPlane creates secrets containing kubeconfig files for each authentication method you configured. 
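The commands that follow all target namespaces composed from your Project and Workspace names, following the conventions from the onboarding guide (`project-<project>` and `project-<project>--ws-<workspace>`). As a quick sanity check, you can derive them with a throwaway helper — purely illustrative, not part of any official OpenControlPlane CLI:

```shell
# Illustrative only: compute Onboarding API namespace names from the
# documented naming conventions.
project_ns() { printf 'project-%s\n' "$1"; }
workspace_ns() { printf 'project-%s--ws-%s\n' "$1" "$2"; }

project_ns platform-team         # -> project-platform-team
workspace_ns platform-team dev   # -> project-platform-team--ws-dev
```

If a `kubectl get` below returns nothing, comparing your `-n` argument against the output of such a helper is a fast way to spot a typo in the namespace.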
+ +### For OIDC Authentication (Human Users) + +```bash +# Get the secret name from status +SECRET_NAME=$(kubectl get managedcontrolplanev2 my-controlplane \ + -n project-platform-team--ws-dev \ + -o jsonpath='{.status.access.oidc.secretRef.name}') + +# Retrieve and decode the kubeconfig +kubectl get secret $SECRET_NAME -n project-platform-team--ws-dev \ + -o jsonpath='{.data.kubeconfig}' | base64 -d > my-controlplane-oidc.kubeconfig +``` + +### For Token Authentication (Machine Users) + +```bash +# Get the secret name from status +SECRET_NAME=$(kubectl get managedcontrolplanev2 my-controlplane \ + -n project-platform-team--ws-dev \ + -o jsonpath='{.status.access.tokens[0].secretRef.name}') + +# Retrieve and decode the kubeconfig +kubectl get secret $SECRET_NAME -n project-platform-team--ws-dev \ + -o jsonpath='{.data.kubeconfig}' | base64 -d > my-controlplane-token.kubeconfig +``` + +## Verify Access + +Test your connection to the ControlPlane: + +```bash +# Using the retrieved kubeconfig +kubectl --kubeconfig=my-controlplane-oidc.kubeconfig get namespaces +``` + +You should see the default Kubernetes namespaces, confirming your ControlPlane is accessible. + +:::tip Set as Default Context +To use your ControlPlane as the default context: +```bash +export KUBECONFIG=my-controlplane-oidc.kubeconfig +kubectl get namespaces +``` +::: + +--- + +## Next Steps + +Continue to [3. Configure](./03-configure.md) to install managed services and extend your ControlPlane functionality. diff --git a/docs/users/getting-started/03-configure.md b/docs/users/getting-started/03-configure.md new file mode 100644 index 0000000..3b5be10 --- /dev/null +++ b/docs/users/getting-started/03-configure.md @@ -0,0 +1,67 @@ +--- +sidebar_position: 3 +--- + +# 3. Configure + +This guide shows you how to install managed services in your ControlPlane to extend its functionality. 
+ +## Install Managed Services + +You can install managed services in your ControlPlane to add capabilities like infrastructure management and workload orchestration. + +### Crossplane + +[Crossplane](https://www.crossplane.io/) enables you to manage cloud infrastructure using Kubernetes-style declarative configuration. + +To install Crossplane, create a `Crossplane` resource in the same namespace as your ControlPlane: + +```yaml +apiVersion: crossplane.services.openmcp.cloud/v1alpha1 +kind: Crossplane +metadata: + name: my-controlplane + namespace: project-platform-team--ws-dev +spec: + version: v1.20.0 + providers: + - name: provider-kubernetes + version: v0.16.0 +``` + +The `name` must match your ControlPlane name. + +```bash +kubectl apply -f crossplane.yaml +``` + +### Landscaper + +[Landscaper](https://github.com/gardener/landscaper) manages the installation, updates, and uninstallation of cloud-native workloads with complex dependency chains. + +To install Landscaper, create a `Landscaper` resource: + +```yaml +apiVersion: landscaper.services.openmcp.cloud/v1alpha2 +kind: Landscaper +metadata: + name: my-controlplane + namespace: project-platform-team--ws-dev +spec: + version: v0.142.0 +``` + +```bash +kubectl apply -f landscaper.yaml +``` + +--- + +## Next Steps + +Congratulations! You have a working ControlPlane with managed services. 
Here's what you can explore next: + +- **[What is a Managed Control Plane?](../concepts/managed-control-plane.md)** — Deeper understanding of ControlPlanes +- **[Service Providers](../concepts/service-provider.md)** — How managed services work +- **[Crossplane Service Provider](https://github.com/openmcp-project/service-provider-crossplane)** — Manage cloud infrastructure +- **[Landscaper Service Provider](https://github.com/openmcp-project/service-provider-landscaper)** — Orchestrate complex workloads diff --git a/docs/users/getting-started/_category_.json b/docs/users/getting-started/_category_.json new file mode 100644 index 0000000..69e4a57 --- /dev/null +++ b/docs/users/getting-started/_category_.json @@ -0,0 +1,8 @@ +{ + "label": "Order Control Plane", + "position": 1, + "link": { + "type": "generated-index", + "description": "Step-by-step guide to create and configure your first Managed Control Plane." + } +} diff --git a/docusaurus.config.ts b/docusaurus.config.ts index 36b4cef..149caca 100644 --- a/docusaurus.config.ts +++ b/docusaurus.config.ts @@ -5,7 +5,7 @@ import type * as Preset from '@docusaurus/preset-classic'; // This runs in Node.js - Don't use client-side code here (browser APIs, JSX...) 
const config: Config = { - title: 'Open Managed Control Plane (openMCP)', + title: 'Open Control Plane', tagline: 'Part of ApeiroRA and NeoNephos.', favicon: 'img/favicon.ico', @@ -72,39 +72,38 @@ const config: Config = { themeConfig: { // Replace with your project's social card - image: 'img/docusaurus-social-card.jpg', + image: 'img/co_axolotl.png', navbar: { - title: 'Open Managed Control Plane (openMCP)', + title: 'Open Control Plane', logo: { - alt: 'My Site Logo', - src: 'img/logo.svg', + alt: 'Open Control Plane Logo', + src: 'img/co_axolotl_mirrored.png', }, items: [ - { - type: 'docSidebar', - sidebarId: 'about', - position: 'left', - label: 'About OpenMCP', - }, { type: 'docSidebar', sidebarId: 'userDocs', position: 'left', - label: 'End-users', + label: 'Get Started', }, { type: 'docSidebar', sidebarId: 'operatorDocs', position: 'left', - label: 'Operators', + label: 'Run Your Platform', }, { type: 'docSidebar', sidebarId: 'developerDocs', position: 'left', - label: 'Developers', + label: 'Build Together', + }, + { + type: 'docSidebar', + sidebarId: 'communitySidebar', + position: 'right', + label: 'Community', }, - {to: '/adrs', label: 'ADRs', position: 'left'}, { href: 'https://github.com/openmcp-project/docs', label: 'GitHub', @@ -134,6 +133,10 @@ const config: Config = { { label: 'NeoNephos', href: 'https://neonephos.org/', + }, + { + label: 'Crossplane Provider Community @ SAP', + href: 'https://github.com/SAP/crossplane-provider-docs', }, ], }, @@ -156,7 +159,7 @@ const config: Config = { }, ], copyright: ` - Copyright © ${new Date().getFullYear()} SAP SE or an SAP affiliate company and openMCP contributors. + Copyright © ${new Date().getFullYear()} SAP SE or an SAP affiliate company and openControlPlane contributors.
This site is hosted by GitHub Pages. Please see the GitHub Privacy Statement for any information how GitHub processes your personal data. diff --git a/package-lock.json b/package-lock.json index cc2c13e..eea2416 100644 --- a/package-lock.json +++ b/package-lock.json @@ -161,6 +161,7 @@ "resolved": "https://registry.npmjs.org/@algolia/client-search/-/client-search-5.32.0.tgz", "integrity": "sha512-kmK5nVkKb4DSUgwbveMKe4X3xHdMsPsOVJeEzBvFJ+oS7CkBPmpfHAEq+CcmiPJs20YMv6yVtUT9yPWL5WgAhg==", "license": "MIT", + "peer": true, "dependencies": { "@algolia/client-common": "5.32.0", "@algolia/requester-browser-xhr": "5.32.0", @@ -321,6 +322,7 @@ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.28.0.tgz", "integrity": "sha512-UlLAnTPrFdNGoFtbSXwcGFQBtQZJCNjaN6hQNP3UPvuNXT1i82N26KL3dZeIpNalWywr9IuQuncaAfUaS1g6sQ==", "license": "MIT", + "peer": true, "dependencies": { "@ampproject/remapping": "^2.2.0", "@babel/code-frame": "^7.27.1", @@ -2155,6 +2157,7 @@ } ], "license": "MIT", + "peer": true, "engines": { "node": ">=18" }, @@ -2177,6 +2180,7 @@ } ], "license": "MIT", + "peer": true, "engines": { "node": ">=18" } @@ -2257,6 +2261,7 @@ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.1.0.tgz", "integrity": "sha512-8sLjZwK0R+JlxlYcTuVnyT2v+htpdrjDOKuMcOVdYjt52Lh8hWRYpxBPoKx/Zg+bcjc3wx6fmQevMmUztS/ccA==", "license": "MIT", + "peer": true, "dependencies": { "cssesc": "^3.0.0", "util-deprecate": "^1.0.2" @@ -2620,6 +2625,7 @@ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.1.0.tgz", "integrity": "sha512-8sLjZwK0R+JlxlYcTuVnyT2v+htpdrjDOKuMcOVdYjt52Lh8hWRYpxBPoKx/Zg+bcjc3wx6fmQevMmUztS/ccA==", "license": "MIT", + "peer": true, "dependencies": { "cssesc": "^3.0.0", "util-deprecate": "^1.0.2" @@ -3480,6 +3486,7 @@ "resolved": "https://registry.npmjs.org/@docusaurus/plugin-content-docs/-/plugin-content-docs-3.8.1.tgz", "integrity": 
"sha512-oByRkSZzeGNQByCMaX+kif5Nl2vmtj2IHQI2fWjCfCootsdKZDPFLonhIp5s3IGJO7PLUfe0POyw0Xh/RrGXJA==", "license": "MIT", + "peer": true, "dependencies": { "@docusaurus/core": "3.8.1", "@docusaurus/logger": "3.8.1", @@ -4102,6 +4109,7 @@ "resolved": "https://registry.npmjs.org/@mdx-js/react/-/react-3.1.0.tgz", "integrity": "sha512-QjHtSaoameoalGnKDT3FoIl4+9RwyTmo9ZJGBdLOks/YOiWHoRDI3PUwEzOE7kEmGcV3AFcp9K6dYu9rEuKLAQ==", "license": "MIT", + "peer": true, "dependencies": { "@types/mdx": "^2.0.0" }, @@ -4414,6 +4422,7 @@ "resolved": "https://registry.npmjs.org/@svgr/core/-/core-8.1.0.tgz", "integrity": "sha512-8QqtOQT5ACVlmsvKOJNEaWmRPmcojMOzCz4Hs2BGG/toAp/K38LcsMRyLp349glq5AzJbCEeimEoxaX6v/fLrA==", "license": "MIT", + "peer": true, "dependencies": { "@babel/core": "^7.21.3", "@svgr/babel-preset": "8.1.0", @@ -5056,6 +5065,7 @@ "resolved": "https://registry.npmjs.org/@types/react/-/react-19.1.8.tgz", "integrity": "sha512-AwAfQ2Wa5bCx9WP8nZL2uMZWod7J7/JSplxbTmBQ5ms6QpqNYm672H0Vu9ZVKVngQ+ii4R/byguVEUZQyeg44g==", "license": "MIT", + "peer": true, "dependencies": { "csstype": "^3.0.2" } @@ -5395,6 +5405,7 @@ "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz", "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", "license": "MIT", + "peer": true, "bin": { "acorn": "bin/acorn" }, @@ -5462,6 +5473,7 @@ "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.17.1.tgz", "integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==", "license": "MIT", + "peer": true, "dependencies": { "fast-deep-equal": "^3.1.3", "fast-uri": "^3.0.1", @@ -5507,6 +5519,7 @@ "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-5.32.0.tgz", "integrity": "sha512-84xBncKNPBK8Ae89F65+SyVcOihrIbm/3N7to+GpRBHEUXGjA3ydWTMpcRW6jmFzkBQ/eqYy/y+J+NBpJWYjBg==", "license": "MIT", + "peer": true, "dependencies": { "@algolia/client-abtesting": "5.32.0", "@algolia/client-analytics": 
"5.32.0", @@ -5972,6 +5985,7 @@ } ], "license": "MIT", + "peer": true, "dependencies": { "caniuse-lite": "^1.0.30001726", "electron-to-chromium": "^1.5.173", @@ -6255,6 +6269,7 @@ "resolved": "https://registry.npmjs.org/chevrotain/-/chevrotain-11.0.3.tgz", "integrity": "sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw==", "license": "Apache-2.0", + "peer": true, "dependencies": { "@chevrotain/cst-dts-gen": "11.0.3", "@chevrotain/gast": "11.0.3", @@ -6965,6 +6980,7 @@ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.1.0.tgz", "integrity": "sha512-8sLjZwK0R+JlxlYcTuVnyT2v+htpdrjDOKuMcOVdYjt52Lh8hWRYpxBPoKx/Zg+bcjc3wx6fmQevMmUztS/ccA==", "license": "MIT", + "peer": true, "dependencies": { "cssesc": "^3.0.0", "util-deprecate": "^1.0.2" @@ -7284,6 +7300,7 @@ "resolved": "https://registry.npmjs.org/cytoscape/-/cytoscape-3.32.1.tgz", "integrity": "sha512-dbeqFTLYEwlFg7UGtcZhCCG/2WayX72zK3Sq323CEX29CY81tYfVhw1MIdduCtpstB0cTOhJswWlM/OEB3Xp+Q==", "license": "MIT", + "peer": true, "engines": { "node": ">=0.10" } @@ -7693,6 +7710,7 @@ "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz", "integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==", "license": "ISC", + "peer": true, "engines": { "node": ">=12" } @@ -8851,6 +8869,7 @@ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "license": "MIT", + "peer": true, "dependencies": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", @@ -13451,6 +13470,7 @@ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "license": "MIT", + "peer": true, "dependencies": { "fast-deep-equal": "^3.1.1", 
"fast-json-stable-stringify": "^2.0.0", @@ -14025,6 +14045,7 @@ } ], "license": "MIT", + "peer": true, "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", @@ -14928,6 +14949,7 @@ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-7.1.0.tgz", "integrity": "sha512-8sLjZwK0R+JlxlYcTuVnyT2v+htpdrjDOKuMcOVdYjt52Lh8hWRYpxBPoKx/Zg+bcjc3wx6fmQevMmUztS/ccA==", "license": "MIT", + "peer": true, "dependencies": { "cssesc": "^3.0.0", "util-deprecate": "^1.0.2" @@ -15747,6 +15769,7 @@ "resolved": "https://registry.npmjs.org/react/-/react-19.1.0.tgz", "integrity": "sha512-FS+XFBNvn3GTAWq26joslQgWNoFu08F4kl0J4CgdNKADkdSGXQyTCnKteIAJy96Br6YbpEU1LSzV5dYtjMkMDg==", "license": "MIT", + "peer": true, "engines": { "node": ">=0.10.0" } @@ -15756,6 +15779,7 @@ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.1.0.tgz", "integrity": "sha512-Xs1hdnE+DyKgeHJeJznQmYMIBG3TKIHJJT95Q58nHLSrElKlGQqDTR2HQ9fx5CN/Gk6Vh/kupBTDLU11/nDk/g==", "license": "MIT", + "peer": true, "dependencies": { "scheduler": "^0.26.0" }, @@ -15811,6 +15835,7 @@ "resolved": "https://registry.npmjs.org/@docusaurus/react-loadable/-/react-loadable-6.0.0.tgz", "integrity": "sha512-YMMxTUQV/QFSnbgrP3tjDzLHRg7vsbMn8e9HAa8o/1iXoiomo48b7sk/kkmWEuWNDPJVlKSJRB6Y2fHqdJk+SQ==", "license": "MIT", + "peer": true, "dependencies": { "@types/react": "*" }, @@ -15839,6 +15864,7 @@ "resolved": "https://registry.npmjs.org/react-router/-/react-router-5.3.4.tgz", "integrity": "sha512-Ys9K+ppnJah3QuaRiLxk+jDWOR1MekYQrlytiXxC1RyfbdsZkS5pvKAzCCr031xHixZwpnsYNT5xysdFHQaYsA==", "license": "MIT", + "peer": true, "dependencies": { "@babel/runtime": "^7.12.13", "history": "^4.9.0", @@ -17680,6 +17706,7 @@ "integrity": "sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw==", "devOptional": true, "license": "Apache-2.0", + "peer": true, "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" @@ -18027,6 +18054,7 @@ "resolved": 
"https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "license": "MIT", + "peer": true, "dependencies": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", @@ -18278,6 +18306,7 @@ "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.100.1.tgz", "integrity": "sha512-YJB/ESPUe2Locd0NKXmw72Dx8fZQk1gTzI6rc9TAT4+Sypbnhl8jd8RywB1bDsDF9Dy1RUR7gn3q/ZJTd0OZZg==", "license": "MIT", + "peer": true, "dependencies": { "@types/eslint-scope": "^3.7.7", "@types/estree": "^1.0.8", diff --git a/sidebars.ts b/sidebars.ts index 2b8608f..b9fd52e 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -13,10 +13,10 @@ import type {SidebarsConfig} from '@docusaurus/plugin-content-docs'; Create as many sidebars as you want. */ const sidebars: SidebarsConfig = { - about: [{type: 'autogenerated', dirName: 'about'}], userDocs: [{type: 'autogenerated', dirName: 'users'}], operatorDocs: [{type: 'autogenerated', dirName: 'operators'}], developerDocs: [{type: 'autogenerated', dirName: 'developers'}], + communitySidebar: [{type: 'autogenerated', dirName: 'community'}], }; export default sidebars; diff --git a/src/css/custom.css b/src/css/custom.css index 2bc6a4c..dba21e4 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -4,27 +4,1912 @@ * work well for content-centric websites. */ -/* You can override the default Infima variables here. 
*/ +/* ===== Design tokens & IFM vars ===== */ :root { - --ifm-color-primary: #2e8555; - --ifm-color-primary-dark: #29784c; - --ifm-color-primary-darker: #277148; - --ifm-color-primary-darkest: #205d3b; - --ifm-color-primary-light: #33925d; - --ifm-color-primary-lighter: #359962; - --ifm-color-primary-lightest: #3cad6e; + /* Landing Page Core Colors */ + --lp-c-white: #ffffff; + --lp-c-black: #000000; + --lp-c-gray-1: #515c67; + --lp-c-gray-2: #414853; + --lp-c-gray-3: #32363f; + --lp-c-gray-soft: rgba(101, 117, 133, 0.16); + + /* Landing Page Dark Theme Colors */ + --lp-c-bg: #1b1b1f; + --lp-c-bg-alt: #161618; + --lp-c-bg-elv: #202127; + --lp-c-bg-soft: #202127; + --lp-c-border: #3c3f44; + --lp-c-divider: #2e2e32; + --lp-c-gutter: #000000; + --lp-c-text-1: #dfdfd6; + --lp-c-text-2: #98989f; + --lp-c-text-3: #6a6a71; + + /* Brand Colors */ + --lp-c-green-1: #1e8f95; + --lp-c-green-2: #32bcac; + --lp-c-green-3: #3fdec0; + --lp-c-green-soft: rgba(16, 185, 129, 0.16); + --lp-c-brand-1: var(--lp-c-green-1); + --lp-c-brand-2: var(--lp-c-green-2); + --lp-c-brand-3: var(--lp-c-green-3); + --lp-c-brand-soft: var(--lp-c-green-soft); + + /* Role-based Teal Spectrum */ + --teal-2: #C2FCEE; + --teal-4: #2CE0BF; + --teal-6: #049F9A; + --teal-7: #07838F; + --teal-10: #02414C; + --teal-11: #012931; + + /* Role Colors - Light to Dark gradient for End User -> Operator -> Contributor */ + --role-enduser-primary: var(--teal-7); + --role-enduser-secondary: var(--teal-10); + --role-operator-primary: var(--teal-10); + --role-operator-secondary: var(--teal-11); + --role-contributor-primary: transparent; + --role-contributor-secondary: transparent; + --role-contributor-border: var(--teal-11); + + /* Button Colors */ + --lp-button-brand-bg: #009f76; + --lp-button-brand-border: transparent; + --lp-button-brand-text: var(--lp-c-white); + --lp-button-brand-hover-bg: var(--lp-c-brand-2); + --lp-button-alt-bg: var(--lp-c-gray-3); + --lp-button-alt-text: var(--lp-c-text-1); + 
--lp-button-alt-hover-bg: var(--lp-c-gray-2); + + /* Hero Specific */ + --lp-home-hero-name-color: transparent; + --lp-home-hero-name-background: -webkit-linear-gradient(120deg, #1e8f95 30%, #3fdec0); + --lp-home-hero-image-background-image: linear-gradient(-45deg, #1e8f95 50%, #3fdec0 50%); + --lp-home-hero-image-filter: blur(68px); + + /* Typography */ + --lp-font-family-base: + "Inter", ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", + "Segoe UI Symbol", "Noto Color Emoji"; + --lp-font-family-mono: ui-monospace, "Menlo", "Monaco", "Consolas", "Liberation Mono", "Courier New", monospace; + + /* ===== BASE IFM/Docusaurus variables ===== */ + --ifm-color-primary: #1e8f95; + --ifm-color-primary-dark: #1a7f84; + --ifm-color-primary-darker: #19777c; + --ifm-color-primary-darkest: #146166; + --ifm-color-primary-light: #229fa6; + --ifm-color-primary-lighter: #24a7ae; + --ifm-color-primary-lightest: #2bbdc5; --ifm-code-font-size: 95%; --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1); + --ifm-table-cell-padding: 0.5rem; + --ifm-hr-margin-vertical: 0.5rem; + --ifm-list-margin: 0.5rem; + --ifm-navbar-padding-vertical: 0; + --ifm-navbar-height: 3rem; + --ifm-blockquote-border-left-width: 6px; + --doc-sidebar-width: 250px !important; + --ifm-spacing-horizontal: 8px; + --ifm-list-left-padding: 1.5rem; + --prism-background-color: #1a2534; + --ifm-h1-font-size: 2.5rem; + --ifm-h2-font-size: 1.8rem; + --ifm-h4-font-size: 1.2rem; + --ifm-h6-font-size: 1rem; + + /* ===== navbar variables ===== */ + --c-accent: #55e9e9; + --c-accent-contrast: #081012; + --c-text-strong: #0a0f12; + --c-text: #1b242a; + --c-text-dim: #5a6a73; + --c-bg: #ffffff; + --c-bg-elev: #f6f8f9; + --c-card: #ffffff; + --c-sep: #e7ecef; + + --shadow-lg: 0 20px 60px rgba(4, 13, 18, 0.18); + --shadow-md: 0 10px 30px rgba(4, 13, 18, 0.14); + + --container: 1200px; + --pad-x: 24px; + --pad-x-lg: 32px; + + --h1: 56px; + --h1-lh: 1.05; + --h2: 36px; + --h2-lh: 1.15; + + --bg-rgb: 
11, 15, 19; + --glass-alpha: 0.99; + --nav-border-scrolled: rgba(255, 255, 255, 0.16); + --navbar-height-fallback: 64px; + --navbar-bg-blur: 12px; + --container-nav: 1320px; + --nav-gap: clamp(22px, 3.6vw, 46px); + --nav-link-size: clamp(14px, 1.05vw, 16px); + --nav-pad-x: 24px; } /* For readability concerns, you should choose a lighter palette in dark mode. */ [data-theme='dark'] { - --ifm-color-primary: #25c2a0; - --ifm-color-primary-dark: #21af90; - --ifm-color-primary-darker: #1fa588; - --ifm-color-primary-darkest: #1a8870; - --ifm-color-primary-light: #29d5b0; - --ifm-color-primary-lighter: #32d8b4; - --ifm-color-primary-lightest: #4fddbf; + --ifm-color-primary: #3fdec0; + --ifm-color-primary-dark: #29d3b3; + --ifm-color-primary-darker: #22ceac; + --ifm-color-primary-darkest: #1aaf93; + --ifm-color-primary-light: #55e3c7; + --ifm-color-primary-lighter: #62e6cc; + --ifm-color-primary-lightest: #87edd9; --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3); } + +[data-theme='dark'] .image-container > .image-src { + filter: drop-shadow(-2px 4px 6px rgba(0, 0, 0, 0.3)); +} + +/* ===== Navbar: transparent-on-top effect (landing page) ===== */ +.navbar { + transition: background-color 0.5s, border-bottom-color 0.5s, box-shadow 0.5s; +} + +.navbar--transparent { + background-color: transparent !important; + border-bottom-color: transparent !important; + box-shadow: none !important; +} + +/* ===== Landing Page: Hero ===== */ +.lp-home-top { + height: 60vh; + width: 100vw; + object-fit: cover; + position: absolute; + opacity: 0; + z-index: 0; +} + +.lp-home { + background-color: var(--ifm-background-color); + min-height: 10vh; + gap: 24px; +} + +.lp-home .container { + display: flex; + margin: 0 auto; + max-width: 1152px; + flex-direction: column; + text-align: center; + gap: 64px; +} + +.lp-home .flex-container { + display: flex; + flex-direction: row; + justify-content: center; + align-items: center; + gap: 16px; + max-width: 1152px; + width: 100%; + height: 
60vh; + margin: 0 auto; + padding: 0 24px; +} + +.lp-home .main { + flex: 1 1 100%; + order: 2; +} + +.lp-home .heading { + line-height: 1.2; + font-weight: 600; + margin-bottom: 24px; +} + +.lp-home .name { + font-size: 48px; + display: block; +} + +.lp-home .name.clip { + background: var(--lp-home-hero-name-background); + -webkit-background-clip: text; + background-clip: text; + -webkit-text-fill-color: var(--lp-home-hero-name-color); +} + +.lp-home .text { + font-size: 28px !important; + font-weight: 500; + color: var(--ifm-font-color-base); + display: block; + margin-top: 8px; +} + +.lp-home .tagline { + color: var(--lp-c-text-2); + font-size: 20px; + line-height: 30px; + margin: 0 auto 32px; + max-width: 600px; +} + +.lp-home .definition { + color: var(--c-text-dim); + font-size: 14px; + line-height: 22px; + margin: 24px auto 0; + max-width: 850px; + opacity: 0.85; + text-align: center; +} + +.lp-home .actions { + display: flex; + flex-wrap: wrap; + gap: 12px; + justify-content: center; + z-index: 1; +} + +/* Hero media */ +.lp-home .image { + order: 1; + flex: 1 1 100%; + justify-content: center; + align-items: center; +} + +.lp-home .image-container { + position: relative; + width: 450px; + height: 450px; + margin: 0 auto; +} + +.lp-home .image-bg { + position: absolute; + inset: 0; + background-image: var(--lp-home-hero-image-background-image); + filter: var(--lp-home-hero-image-filter); + border-radius: 50%; + opacity: 0.6; +} + +.image-container > .image-src { + position: absolute; + left: 0; + top: 0; + width: 100%; + height: 100%; + object-fit: contain; + z-index: 1; + transform-origin: 50% 55%; + animation: lp-axolotl-float 10s ease-in-out infinite; + filter: drop-shadow(-2px 4px 6px rgba(0, 0, 0, 0.25)); +} + +@keyframes lp-axolotl-float { + 0% { + transform: translate(0, 0) rotate(0deg); + } + 25% { + transform: translate(3px, -4px) rotate(-0.8deg); + } + 50% { + transform: translate(-3px, 3px) rotate(0.7deg); + } + 75% { + transform: translate(2px, 
4px) rotate(0.5deg); + } + 100% { + transform: translate(0, 0) rotate(0deg); + } +} + +@media (prefers-reduced-motion: reduce) { + .image-container > .image-src { + animation: none; + } +} + +/* Hero media queries */ +@media (max-width: 480px) { + .lp-home .container { + gap: var(--nav-pad-x) !important; + } + .lp-home .flex-container { + flex-direction: column; + height: 80vh; + } + .lp-home .actions { + display: grid; + grid-template-columns: repeat(2, 1fr); + gap: 10px; + width: 100%; + } + .action a { + padding: 6px 8px; + font-size: 12px; + } +} + +@media (min-width: 481px) and (max-width: 959px) { + .lp-home .image-container { + width: 280px; + height: 280px; + } + .lp-home .flex-container { + flex-direction: column; + } +} + +@media (min-width: 768px) { + .lp-home .name { + font-size: 56px; + } + .lp-home .text { + font-size: 36px; + } +} + +@media (min-width: 960px) { + .lp-home .flex-container { + height: 50vh; + } + .lp-home .main { + flex: 1 1 50%; + order: 1; + text-align: left; + } + .lp-home .actions { + justify-content: flex-start; + } + .lp-home .image { + flex: 1 1 50%; + order: 2; + } +} + +/* ===== Action buttons ===== */ +.action p { + margin: 0; + padding: 0; +} + +.action a { + display: inline-flex; + align-items: center; + justify-content: center; + padding: 8px 16px; + line-height: 1.2; + height: auto; + min-width: 0; + white-space: nowrap; + border-radius: 12px; + font-size: 14px; + font-weight: 600; + text-decoration: none !important; + border: 1px solid transparent; + transition: + transform 0.2s ease, + box-shadow 0.2s ease, + background-color 0.25s ease, + color 0.25s ease; + width: 100%; +} + +/* Legacy brand/alt button styles for other buttons */ +.action.brand a { + background: var(--lp-c-green-3); + color: black; + border-color: transparent; +} + +.action.brand a:hover { + transform: translateY(-2px); + box-shadow: 0 4px 10px rgba(0, 0, 0, 0.25); + color: black; +} + +.action.brand a:active { + transform: translateY(0); + 
box-shadow: none; +} + +.action.alt a { + background: var(--lp-c-green-1); + color: black; + border-color: transparent; +} + +.action.alt a:hover { + transform: translateY(-2px); + box-shadow: 0 4px 10px rgba(0, 0, 0, 0.25); + color: black; +} + +.action.alt a:active { + transform: translateY(0); +} + +/* Role-specific hero button colors - MUST come after generic .action.alt a */ +.lp-home .action.alt a[href*="/users/"], +.lp-home .action a[href*="/users/"] { + background: var(--role-enduser-primary) !important; + color: white !important; + border-color: transparent; +} + +.lp-home .action.alt a[href*="/users/"]:hover, +.lp-home .action a[href*="/users/"]:hover { + background: var(--role-enduser-secondary) !important; + transform: translateY(-2px); + box-shadow: 0 4px 10px rgba(7, 131, 143, 0.4); + color: white !important; +} + +.lp-home .action.alt a[href*="/operators/"], +.lp-home .action a[href*="/operators/"] { + background: var(--role-operator-primary) !important; + color: white !important; + border-color: transparent; +} + +.lp-home .action.alt a[href*="/operators/"]:hover, +.lp-home .action a[href*="/operators/"]:hover { + background: var(--role-operator-secondary) !important; + transform: translateY(-2px); + box-shadow: 0 4px 10px rgba(2, 65, 76, 0.4); + color: white !important; +} + +.lp-home .action.alt a[href*="/developers/"], +.lp-home .action a[href*="/developers/"] { + background: transparent !important; + color: white !important; + border: none !important; +} + +.lp-home .action.alt a[href*="/developers/"]:hover, +.lp-home .action a[href*="/developers/"]:hover { + background: rgba(1, 41, 49, 0.3) !important; + transform: translateY(-2px); + box-shadow: 0 4px 10px rgba(1, 41, 49, 0.4); + color: white !important; +} + +.action a:active { + transform: translateY(0); + box-shadow: none; +} + +.lp-home .actions { + gap: 10px; +} + +/* ===== Feature cards ===== */ +.lp-features { + display: grid; + margin: 60px auto 0; + grid-template-columns: 1fr; + gap: 
14px; +} + +.lp-features h3 { + font-size: 16px; + color: var(--lp-c-text-1) !important; +} + +.lp-feature-card { + border-radius: 12px; + padding: 16px 28px 24px 28px; + position: relative; + background-color: #f6f6f7; + border: 1px solid #f6f6f7; + transition: border-color 0.25s, background-color 0.25s; + overflow: hidden; + isolation: isolate; +} + +.lp-feature-card .mouse-glow { + position: absolute; + width: 420px; + height: 420px; + border-radius: 12px; + background: radial-gradient(circle, rgba(30, 143, 149, 0.45) 0%, rgba(194, 252, 238, 0.25) 60%, transparent 100%); + filter: blur(48px); + transform: translate(-50%, -50%); + z-index: 1; + opacity: 0; + transition: opacity 0.2s; + pointer-events: none; +} + +.lp-feature-card:hover .mouse-glow { + opacity: 0.95; +} + +.lp-feature-card > *:not(.mouse-glow) { + position: relative; + z-index: 2; +} + +@media (max-width: 640px) { + .lp-feature-card .mouse-glow { + display: none; + } +} + +.lp-feature-card img { + width: 72px; + height: 72px; + margin-bottom: 16px; +} + +.lp-feature-card p { + margin: 0; + color: var(--ifm-color-emphasis-700); + font-weight: 500; + line-height: 1.6; + font-size: 14px; +} + +@media (max-width: 480px) { + .lp-features { + grid-template-columns: 1fr; + gap: 20px; + width: 100%; + } + .lp-feature-card { + text-align: left; + } +} + +@media (min-width: 481px) and (max-width: 959px) { + .lp-features { + grid-template-columns: repeat(2, 1fr); + gap: 20px; + width: 100%; + } +} + +@media (min-width: 960px) { + .lp-features { + grid-template-columns: repeat(3, 1fr); + gap: 14px; + max-width: 1152px; + margin-left: auto; + margin-right: auto; + } +} + +@media (min-width: 1152px) { + .lp-feature-card { + padding: 12px 28px 20px 28px; + } +} + +/* ===== Get started section ===== */ +.get-started-section { + background-color: var(--ifm-background-color); + display: flex; + flex-direction: column; + justify-content: space-around; + align-items: center; + padding: 64px 24px; + height: 24vh; +} + 
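The `.mouse-glow` rules above center the glow on the cursor via `transform: translate(-50%, -50%)`, so the accompanying script only needs to set `left`/`top` to the pointer position relative to the card. A minimal sketch of that math, assuming a plain `mousemove` handler — the `glowOffset` helper and the commented wiring are illustrative, not the site's actual component code:

```typescript
// Position for the glow (in px, relative to the card) from a mousemove event.
// The CSS `translate(-50%, -50%)` then centers the glow on that point.
interface GlowPos {
  left: number;
  top: number;
}

function glowOffset(
  cardRect: { left: number; top: number },
  clientX: number,
  clientY: number
): GlowPos {
  // Pointer coordinates are viewport-relative; subtract the card's own
  // viewport offset to get coordinates inside the card.
  return { left: clientX - cardRect.left, top: clientY - cardRect.top };
}

// Hypothetical wiring inside the feature-card component:
// card.addEventListener("mousemove", (e) => {
//   const { left, top } = glowOffset(
//     card.getBoundingClientRect(), e.clientX, e.clientY
//   );
//   glow.style.left = `${left}px`;
//   glow.style.top = `${top}px`;
// });
```

With the pointer at viewport (300, 400) over a card whose top-left sits at (250, 380), the glow lands at (50, 20) inside the card.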
+.gray-white { + background-color: var(--ifm-background-surface-color); +} + +/* ===== Open-source section ===== */ +.open-source-section { + display: flex; + flex-direction: column; + align-items: center; + padding: 24px 24px 48px; + background-color: var(--ifm-background-color); + color: var(--ifm-font-color-base); +} + +.open-source-section > .col { + max-width: 1152px; + width: 100%; + flex: none; +} + +.open-source-section p, +.open-source-section p b { + color: inherit; +} + +.open-source-section a { + color: var(--ifm-color-primary); +} + +.open-source-wrapper { + height: 16vh; + display: grid; + place-items: center; +} + +.typing-open-source { + width: 17.5ch; + white-space: nowrap; + overflow: hidden; + border-right: 3px solid; + font-family: monospace; + font-size: 4em; + opacity: 0; +} + +.typing-open-source.animate { + opacity: 1; + animation: typing 1.4s steps(17), blink 0.5s step-end infinite alternate; +} + +@keyframes typing { + from { + width: 0; + } +} + +@keyframes blink { + 50% { + border-color: transparent; + } +} + +.landingpage-section-title { + font-family: "72 Black", sans-serif; + text-align: center; + margin-bottom: 0; + width: 100%; +} + +/* ===== Fade-in utility ===== */ +.fadeIn { + opacity: 1; + animation-name: fadeIn; + animation-duration: 1s; + animation-fill-mode: both; +} + +/* ===== Light theme overrides ===== */ +html[data-theme="light"] { + --lp-c-bg: #ffffff; + --lp-c-bg-alt: #f9fbfc; + --lp-c-bg-elv: #ffffff; + --lp-c-bg-soft: #f6f8f9; + --lp-c-border: #e7ecef; + --lp-c-text-1: #0a0f12; + --lp-c-text-2: #1b242a; + --lp-c-text-3: #5a6a73; + --c-text-dim: #5a6a73; + --lp-c-divider: #e2e2de; + --c-accent: #519c8c; + --lp-button-brand-text: #ffffff; + --lp-button-brand-hover-bg: #32bcac; + --lp-button-alt-bg: #eef3f6; + --lp-button-alt-text: var(--lp-c-text-1); + --lp-button-alt-hover-bg: #e6edf3; + --nav-border-scrolled: rgba(0, 0, 0, 0.16); + --lp-shadow-soft: 0 2px 6px rgba(0, 0, 0, 0.08); + --lp-shadow-strong: 0 4px 12px 
rgba(0, 0, 0, 0.15); +} + +html[data-theme="light"] .lp-home { + background-color: var(--ifm-background-color); +} + +html[data-theme="light"] .lp-home .heading .name { + color: var(--lp-c-text-1); +} + +html[data-theme="light"] .lp-home .tagline { + color: var(--lp-c-text-2) !important; +} + +html[data-theme="light"] .lp-home a { + text-decoration-color: color-mix(in oklab, var(--c-accent) 50%, transparent); +} + +html[data-theme="light"] .lp-home a:hover { + text-decoration-color: var(--c-accent); +} + +html[data-theme="light"] .lp-home .lp-features { + color: var(--lp-c-text-2); +} + +html[data-theme="light"] .lp-home .lp-feature-card h3 { + color: var(--lp-c-text-1) !important; +} + +html[data-theme="light"] .lp-features .lp-feature-card:hover { + background: #eef3f6; +} + +html[data-theme="light"] .lp-home .image-bg { + opacity: 0.3; +} + +/* Light theme: override for button colors */ +html[data-theme="light"] .lp-home .action.alt a[href*="/users/"], +html[data-theme="light"] .lp-home .action a[href*="/users/"] { + background: var(--role-enduser-primary) !important; + color: white !important; +} + +html[data-theme="light"] .lp-home .action.alt a[href*="/users/"]:hover, +html[data-theme="light"] .lp-home .action a[href*="/users/"]:hover { + background: var(--role-enduser-secondary) !important; + color: white !important; +} + +html[data-theme="light"] .lp-home .action.alt a[href*="/operators/"], +html[data-theme="light"] .lp-home .action a[href*="/operators/"] { + background: var(--role-operator-primary) !important; + color: white !important; +} + +html[data-theme="light"] .lp-home .action.alt a[href*="/operators/"]:hover, +html[data-theme="light"] .lp-home .action a[href*="/operators/"]:hover { + background: var(--role-operator-secondary) !important; + color: white !important; +} + +html[data-theme="light"] .lp-home .action.alt a[href*="/developers/"], +html[data-theme="light"] .lp-home .action a[href*="/developers/"] { + background: transparent !important; + 
color: var(--teal-11) !important; + border: none !important; +} + +html[data-theme="light"] .lp-home .action.alt a[href*="/developers/"]:hover, +html[data-theme="light"] .lp-home .action a[href*="/developers/"]:hover { + background: rgba(1, 41, 49, 0.1) !important; + color: var(--teal-10) !important; +} + +html[data-theme="light"] .lp-home .actions .action a { + color: #ffffff !important; +} + +html[data-theme="light"] .lp-home .actions .action:hover { + transform: translateY(-2px); +} + +html[data-theme="light"] .lp-home .actions .action:active { + transform: translateY(0); + box-shadow: none; +} + +html[data-theme="light"] .open-source-section { + color: var(--lp-c-text-2); +} + +html[data-theme="light"] .get-started-section { + background-color: var(--ifm-background-color); +} + +/* ===== Dark mode: open-source section ===== */ +html[data-theme="dark"] .open-source-section { + color: var(--lp-c-text-1); +} + +html[data-theme="dark"] .open-source-section p, +html[data-theme="dark"] .open-source-section p b { + color: var(--lp-c-text-1) !important; +} + +html[data-theme="dark"] .open-source-section a { + color: var(--ifm-color-primary) !important; +} + +html[data-theme="dark"] .open-source-section a:hover { + color: var(--ifm-color-primary-light) !important; +} + +html[data-theme="dark"] .typing-open-source { + color: var(--lp-c-green-3); + border-right-color: var(--lp-c-green-3); +} + +/* ===== Dark mode: feature cards ===== */ +html[data-theme="dark"] .lp-features .lp-feature-card { + background-color: var(--ifm-background-surface-color); + border-color: var(--ifm-background-surface-color); +} + +/* ===== Dark mode: get-started section ===== */ +html[data-theme="dark"] .get-started-section { + color: var(--lp-c-text-1); +} + +html[data-theme="dark"] .get-started-section span { + color: var(--lp-c-text-2) !important; +} + +html[data-theme="dark"] .get-started-section a:not(.button) { + color: var(--ifm-color-primary) !important; +} + +/* Dark mode: ensure role 
button colors are visible */ +html[data-theme="dark"] .lp-home .action.alt a[href*="/users/"], +html[data-theme="dark"] .lp-home .action a[href*="/users/"] { + background: var(--role-enduser-primary) !important; + color: white !important; +} + +html[data-theme="dark"] .lp-home .action.alt a[href*="/users/"]:hover, +html[data-theme="dark"] .lp-home .action a[href*="/users/"]:hover { + background: var(--role-enduser-secondary) !important; + color: white !important; +} + +html[data-theme="dark"] .lp-home .action.alt a[href*="/operators/"], +html[data-theme="dark"] .lp-home .action a[href*="/operators/"] { + background: var(--role-operator-primary) !important; + color: white !important; +} + +html[data-theme="dark"] .lp-home .action.alt a[href*="/operators/"]:hover, +html[data-theme="dark"] .lp-home .action a[href*="/operators/"]:hover { + background: var(--role-operator-secondary) !important; + color: white !important; +} + +html[data-theme="dark"] .lp-home .action.alt a[href*="/developers/"], +html[data-theme="dark"] .lp-home .action a[href*="/developers/"] { + background: transparent !important; + color: white !important; + border: none !important; +} + +html[data-theme="dark"] .lp-home .action.alt a[href*="/developers/"]:hover, +html[data-theme="dark"] .lp-home .action a[href*="/developers/"]:hover { + background: rgba(1, 41, 49, 0.3) !important; + color: white !important; +} + +/* ===== Dark mode: footer ===== */ +html[data-theme="dark"] .footer--dark { + background-color: #161618; + border-top-color: #2e2e32; +} + +html[data-theme="dark"] .footer--dark .footer__title { + color: #dfdfd6; +} + +html[data-theme="dark"] .footer--dark .footer__col { + color: #98989f; +} + +html[data-theme="dark"] .footer--dark .footer__item a, +html[data-theme="dark"] .footer--dark .footer__link-item { + color: #98989f !important; +} + +html[data-theme="dark"] .footer--dark .footer__item a:hover, +html[data-theme="dark"] .footer--dark .footer__link-item:hover { + color: 
var(--ifm-color-primary) !important; + text-decoration: none; +} + +html[data-theme="dark"] .footer--dark .footer__copyright { + color: #6a6a71; +} + +/* ===== Light mode: footer ===== */ +html[data-theme="light"] .footer--dark { + background-color: #f6f8f9; + border-top-color: #e7ecef; +} + +html[data-theme="light"] .footer--dark .footer__title { + color: #0a0f12; +} + +html[data-theme="light"] .footer--dark .footer__item a, +html[data-theme="light"] .footer--dark .footer__link-item { + color: #1b242a !important; +} + +html[data-theme="light"] .footer--dark .footer__item a:hover, +html[data-theme="light"] .footer--dark .footer__link-item:hover { + color: var(--ifm-color-primary) !important; + text-decoration: none; +} + +html[data-theme="light"] .footer--dark .footer__copyright { + color: #5a6a73; +} + +/* ===== Footer base ===== */ +.footer--dark { + --ifm-footer-background-color: var(--lp-c-bg); + background-color: var(--lp-c-bg); + border-top: 1px solid var(--lp-c-divider); + padding: 32px 0 16px 0; + margin-top: 0; +} + +.footer--dark .footer__copyright { + font-size: 12px; + line-height: 24px; + text-align: center; + padding-top: 16px; + margin-top: 16px; + border-top: 1px solid var(--lp-c-divider); + width: 100%; +} + +.footer--dark svg { + display: none; +} + +/* ===== "See our projects" button in open-source-section ===== */ +.open-source-section .button--primary { + margin-top: 16px; + background-color: var(--ifm-color-primary); + border-color: var(--ifm-color-primary); + color: #ffffff !important; + font-weight: 600; +} + +.open-source-section .button--primary:hover { + background-color: var(--ifm-color-primary-dark); + border-color: var(--ifm-color-primary-dark); + color: #ffffff !important; +} + +html[data-theme="dark"] .open-source-section .button--primary { + background-color: var(--lp-c-green-3); + border-color: var(--lp-c-green-3); + color: #000000 !important; +} + +html[data-theme="dark"] .open-source-section .button--primary:hover { + 
background-color: var(--lp-c-green-2); + border-color: var(--lp-c-green-2); + color: #000000 !important; +} + +/* ===== Global link colors in light mode ===== */ +html[data-theme="light"] { + --ifm-link-color: var(--c-accent); + --ifm-link-hover-color: var(--c-accent); +} + +html[data-theme="light"] #__docusaurus :where(.markdown, .theme-doc-markdown, .container, article) a:hover { + text-decoration-color: var(--c-accent); +} + +/* ===== Responsive: open-source on small screens ===== */ +@media only screen and (max-width: 1024px) { + .typing-open-source { + width: 17ch; + font-size: 2em; + } +} + +/* ===== Custom Footer (ocp-footer) ===== */ +.ocp-footer { + background-color: var(--ifm-background-color); + border-top: 1px solid var(--ifm-color-emphasis-200); + font-size: 13px; + line-height: 1.6; + color: var(--ifm-font-color-base); +} + +[data-theme='dark'] .ocp-footer { + border-top-color: #2e2e32; + color: #98989f; +} + +/* EU Banner row */ +.ocp-footer__eu-banner { + background-color: var(--ifm-background-surface-color); + border-bottom: 1px solid var(--ifm-color-emphasis-200); + padding: 20px 0; +} + +[data-theme='dark'] .ocp-footer__eu-banner { + border-bottom-color: #2e2e32; +} + +.ocp-footer__eu-container { + max-width: 1200px; + margin: 0 auto; + padding: 0 24px; + display: flex; + align-items: center; + gap: 24px; +} + +.ocp-footer__eu-logos { + flex-shrink: 0; +} + +.ocp-footer__eu-logos img { + max-height: none; + max-width: 300px; + width: 100%; + display: block; +} + +.ocp-footer__eu-text { + flex: 1; + font-size: 11px; + color: #5a6a73; +} + +[data-theme='dark'] .ocp-footer__eu-text { + color: #6a6a71; +} + +.ocp-footer__eu-text p { + margin: 0; +} + +.ocp-footer__eu-text p + p { + margin-top: 4px; +} + +.ocp-footer__neonephos { + flex-shrink: 0; + font-size: 15px; +} + +.ocp-footer__neonephos a { + color: #049F9A; + text-decoration: none; + font-size: 15px; +} + +.ocp-footer__neonephos a:hover { + color: #07838F; + text-decoration: underline; +} + 
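The `.typing-open-source` rules earlier in this file hard-code two values that must agree: the `ch` width and the `steps()` count of the `typing` keyframes (the base rule pairs `17.5ch` with `steps(17)`, which already drift slightly). If the headline ever becomes configurable, deriving both from the text keeps them in sync — a sketch under that assumption, where `typingParams` and the sample string are illustrative, not project code:

```typescript
// Derive matching width/steps values for the CSS typewriter effect from the
// headline text itself, so the ch-width and steps() count cannot drift apart.
function typingParams(text: string, durationS = 1.4) {
  const n = text.length; // one animation step per revealed character
  return {
    width: `${n}ch`,
    animation: `typing ${durationS}s steps(${n}), blink 0.5s step-end infinite alternate`,
  };
}

// Hypothetical headline; these values could be applied as inline styles
// or as CSS custom properties consumed by .typing-open-source.
const params = typingParams("Open Control Plane");
```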
+[data-theme='dark'] .ocp-footer__neonephos a { + color: #2CE0BF; +} + +[data-theme='dark'] .ocp-footer__neonephos a:hover { + color: #3fdec0; +} + +[data-theme='dark'] .ocp-footer__neonephos img { + filter: brightness(0) invert(1); +} + +/* Copyright row */ +.ocp-footer__copyright-row { + border-bottom: 1px solid var(--ifm-color-emphasis-200); + padding: 16px 0; +} + +[data-theme='dark'] .ocp-footer__copyright-row { + border-bottom-color: #2e2e32; +} + +.ocp-footer__inner { + max-width: 1200px; + margin: 0 auto; + padding: 0 24px; +} + +.ocp-footer__copyright-row p { + margin: 0; + font-size: 12px; + color: #5a6a73; +} + +[data-theme='dark'] .ocp-footer__copyright-row p { + color: #98989f; +} + +.ocp-footer__copyright-row a { + color: #049F9A; + text-decoration: none; +} + +.ocp-footer__copyright-row a:hover { + text-decoration: underline; + color: #07838F; +} + +[data-theme='dark'] .ocp-footer__copyright-row a { + color: #2CE0BF; +} + +/* Legal links row */ +.ocp-footer__legal-row { + padding: 12px 0; +} + +.ocp-footer__legal-links { + display: flex; + align-items: center; + gap: 10px; + flex-wrap: wrap; +} + +.ocp-footer__legal-links a { + color: #5a6a73; + text-decoration: none; + font-size: 12px; +} + +.ocp-footer__legal-links a:hover { + color: #049F9A; + text-decoration: underline; +} + +[data-theme='dark'] .ocp-footer__legal-links a { + color: #98989f; +} + +[data-theme='dark'] .ocp-footer__legal-links a:hover { + color: #3fdec0; +} + +.ocp-footer__legal-sep { + color: #c0cdd4; + font-size: 12px; + user-select: none; +} + +[data-theme='dark'] .ocp-footer__legal-sep { + color: #6a6a71; +} + +/* Responsive footer */ +@media (max-width: 768px) { + .ocp-footer__eu-container { + flex-direction: column; + align-items: flex-start; + gap: 16px; + } + + .ocp-footer__eu-logos { + align-self: center; + } +} + +/* ===== Ecosystem Page Styling ===== */ + +.ecosystem-grid { + display: grid; + grid-template-columns: repeat(2, 1fr); + gap: 24px; + margin: 32px 0; + 
max-width: 900px; +} + +.project-card { + background: var(--ifm-card-background-color); + border: 1px solid var(--ifm-color-emphasis-200); + border-radius: 16px; + padding: 28px; + transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1); + display: flex; + flex-direction: column; + height: 100%; + position: relative; + overflow: hidden; +} + +/* Subtle gradient overlay on card */ +.project-card::before { + content: ''; + position: absolute; + top: 0; + left: 0; + right: 0; + height: 4px; + background: linear-gradient(90deg, var(--teal-6) 0%, var(--teal-4) 100%); + opacity: 0; + transition: opacity 0.3s ease; +} + +.project-card:hover::before { + opacity: 1; +} + +.project-card:hover { + transform: translateY(-6px); + box-shadow: 0 12px 28px rgba(4, 159, 154, 0.15), 0 4px 12px rgba(4, 159, 154, 0.08); + border-color: var(--teal-4); +} + +[data-theme='dark'] .project-card { + background: rgba(255, 255, 255, 0.02); + border-color: rgba(255, 255, 255, 0.08); +} + +[data-theme='dark'] .project-card:hover { + background: rgba(255, 255, 255, 0.04); + border-color: var(--teal-6); + box-shadow: 0 12px 28px rgba(44, 224, 191, 0.15), 0 4px 12px rgba(44, 224, 191, 0.08); +} + +.project-card-header { + display: flex; + align-items: center; + gap: 18px; + margin-bottom: 20px; + padding-bottom: 16px; + border-bottom: 1px solid var(--ifm-color-emphasis-100); +} + +[data-theme='dark'] .project-card-header { + border-bottom-color: rgba(255, 255, 255, 0.08); +} + +.project-logo, +img.project-logo, +.project-card-header .project-logo, +.project-card-header img.project-logo, +.project-card-header > img.project-logo, +.ecosystem-grid .project-logo, +.ecosystem-grid img.project-logo, +img[alt="Kubernetes"], +img[alt="Crossplane"], +img[alt="Gardener"], +img[alt="Flux"], +img[alt="Kyverno"], +img[alt="External Secrets"], +img[alt="Open Component Model"], +img[alt="Landscaper"] { + width: 48px !important; + height: 48px !important; + min-width: 48px !important; + min-height: 48px !important; + 
max-width: 48px !important; + max-height: 48px !important; + object-fit: contain !important; + flex-shrink: 0 !important; + display: block !important; +} + +.project-card-header h3 { + margin: 0; + font-size: 1.35rem; + font-weight: 700; + background: linear-gradient(135deg, var(--teal-7) 0%, var(--teal-10) 100%); + -webkit-background-clip: text; + -webkit-text-fill-color: transparent; + background-clip: text; +} + +[data-theme='dark'] .project-card-header h3 { + background: linear-gradient(135deg, var(--teal-4) 0%, var(--teal-6) 100%); + -webkit-background-clip: text; + -webkit-text-fill-color: transparent; + background-clip: text; +} + +.project-description { + font-size: 0.975rem; + line-height: 1.65; + margin-bottom: 24px; + flex-grow: 1; + color: var(--ifm-color-emphasis-800); +} + +[data-theme='dark'] .project-description { + color: var(--ifm-color-emphasis-600); +} + +.project-links { + display: flex; + flex-direction: column; + gap: 12px; + margin-top: auto; +} + +.project-link { + padding: 11px 20px; + border-radius: 10px; + text-align: center; + font-weight: 600; + font-size: 0.9rem; + text-decoration: none; + transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1); + display: inline-flex; + align-items: center; + justify-content: center; + gap: 8px; + line-height: 1; + vertical-align: middle; + white-space: nowrap; + position: relative; + overflow: hidden; + width: 100%; +} + +.project-link svg { + transition: transform 0.3s ease; +} + +.project-link:hover svg { + transform: scale(1.1); +} + +.project-link-primary { + background: linear-gradient(135deg, var(--teal-6) 0%, var(--teal-7) 100%); + color: white; + box-shadow: 0 3px 10px rgba(4, 159, 154, 0.25); + border: 1px solid transparent; +} + +.project-link-primary:hover { + background: linear-gradient(135deg, var(--teal-7) 0%, var(--teal-10) 100%); + color: white; + box-shadow: 0 6px 16px rgba(4, 159, 154, 0.35); + transform: translateY(-2px); +} + +.project-link-primary:active { + transform: translateY(0); 
+ box-shadow: 0 2px 6px rgba(4, 159, 154, 0.3); +} + +.project-link-secondary { + background: var(--ifm-color-emphasis-0); + color: var(--teal-7); + border: 2px solid var(--teal-6); + box-shadow: 0 2px 6px rgba(4, 159, 154, 0.1); +} + +.project-link-secondary:hover { + background: var(--teal-6); + color: white; + border-color: var(--teal-6); + box-shadow: 0 6px 16px rgba(4, 159, 154, 0.25); + transform: translateY(-2px); +} + +.project-link-secondary:active { + transform: translateY(0); + box-shadow: 0 2px 6px rgba(4, 159, 154, 0.2); +} + +[data-theme='dark'] .project-link-primary { + background: linear-gradient(135deg, var(--teal-6) 0%, var(--teal-7) 100%); + box-shadow: 0 3px 10px rgba(44, 224, 191, 0.2); +} + +[data-theme='dark'] .project-link-primary:hover { + background: linear-gradient(135deg, var(--teal-4) 0%, var(--teal-6) 100%); + box-shadow: 0 6px 16px rgba(44, 224, 191, 0.3); +} + +[data-theme='dark'] .project-link-secondary { + background: rgba(255, 255, 255, 0.03); + color: var(--teal-4); + border-color: var(--teal-6); +} + +[data-theme='dark'] .project-link-secondary:hover { + background: var(--teal-6); + color: var(--ifm-color-emphasis-0); +} + +@media (max-width: 768px) { + .ecosystem-grid { + grid-template-columns: 1fr; + } +} + +/* ===== Role-based Page Header Accents ===== */ +/* Add subtle top border to main content based on current section */ +article[data-role="enduser"] { + border-top: 4px solid var(--role-enduser-primary); + padding-top: 1rem; +} + +/* ===== CRITICAL: Force project logo sizes ===== */ +/* This must be at the end to override Docusaurus defaults */ +.ecosystem-grid .project-card-header img.project-logo, +.ecosystem-grid img[class*="project-logo"], +.project-card-header img[alt="Kubernetes"], +.project-card-header img[alt="Crossplane"], +.project-card-header img[alt="Gardener"], +.project-card-header img[alt="Flux"], +.project-card-header img[alt="Kyverno"], +.project-card-header img[alt="External Secrets"], 
+.project-card-header img[alt="Open Component Model"], +.project-card-header img[alt="Landscaper"] { + width: 64px !important; + height: 64px !important; + min-width: 64px !important; + min-height: 64px !important; + max-width: 64px !important; + max-height: 64px !important; + object-fit: contain !important; + flex-shrink: 0 !important; + display: block !important; +} + +/* ===== Axolotl Hidden - Control Planes Only ===== */ +.image-src { + opacity: 0; + pointer-events: none; + display: none; +} + +/* ===== Flying Control Planes ===== */ +.flying-cp { + position: absolute; + width: 180px; + height: 180px; + object-fit: contain; + z-index: 2; + opacity: 0; + filter: drop-shadow(0 2px 8px rgba(4, 159, 154, 0.3)); + transform: translateY(100vh); + transition: transform 0.8s cubic-bezier(0.34, 1.56, 0.64, 1), opacity 0.8s ease-out; +} + +.flying-cp.visible { + opacity: 0.85; + transform: translateY(0); +} + +[data-theme='dark'] .flying-cp { + filter: drop-shadow(0 2px 12px rgba(44, 224, 191, 0.4)); +} + +@keyframes float-cp-1 { + 0%, 100% { + transform: translate(0, 0); + } + 50% { + transform: translate(8px, -10px); + } +} + +@keyframes float-cp-2 { + 0%, 100% { + transform: translate(0, 0); + } + 50% { + transform: translate(-10px, -8px); + } +} + +@keyframes float-cp-3 { + 0%, 100% { + transform: translate(0, 0); + } + 50% { + transform: translate(10px, -12px); + } +} + +.flying-cp-1 { + bottom: -8%; + left: 1%; + transition-delay: 0s; +} + +.flying-cp-1.visible { + animation: float-cp-1 5s ease-in-out 0.8s infinite; +} + +.flying-cp-2 { + bottom: 30%; + left: 35%; + transform: translateY(100vh); + transition-delay: 0.2s; +} + +.flying-cp-2.visible { + transform: translateY(0); + animation: float-cp-2 5.5s ease-in-out 1s infinite; +} + +.flying-cp-3 { + bottom: -5%; + right: 1%; + transition-delay: 0.4s; +} + +.flying-cp-3.visible { + animation: float-cp-3 5.2s ease-in-out 1.2s infinite; +} + +/* ===== Cloud Projection - Cutting Edge Animation ===== */ 
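The `.visible` modifier on `.flying-cp` above (and on the `.cp-cloud-projection` elements styled below) has no CSS-only trigger, so a script presumably adds the class once the hero scrolls into view; the per-element `transition-delay` values then stagger the reveal. A hedged sketch — the pure `classListAfter` helper and the commented `IntersectionObserver` wiring are assumptions, not the site's actual code:

```typescript
// Decide an element's class list after an intersection callback: add
// "visible" exactly once, the first time the element enters the viewport.
function classListAfter(current: string[], isIntersecting: boolean): string[] {
  if (isIntersecting && !current.includes("visible")) {
    return [...current, "visible"];
  }
  return current;
}

// Hypothetical browser wiring (reveal once per element, then unobserve):
// const io = new IntersectionObserver((entries) => {
//   for (const entry of entries) {
//     if (entry.isIntersecting) {
//       entry.target.classList.add("visible");
//       io.unobserve(entry.target);
//     }
//   }
// }, { threshold: 0.2 });
// document
//   .querySelectorAll(".flying-cp, .cp-cloud-projection")
//   .forEach((el) => io.observe(el));
```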
+.cp-cloud-projection { + position: absolute; + width: 303.6px; + height: 212.52px; + z-index: 10; + opacity: 0; + pointer-events: none; + transform: translateY(50px); + transition: opacity 0.8s ease-out, transform 0.8s cubic-bezier(0.34, 1.56, 0.64, 1); +} + +.cp-cloud-projection.visible { + opacity: 1; + transform: translateY(0); +} + +.cp-cloud-1 { + bottom: 17%; + left: -10%; + transition-delay: 0.8s; +} + +.cp-cloud-1.visible { + animation: cloud-float-gentle 4s ease-in-out 1.6s infinite; +} + +.cp-cloud-2-1 { + bottom: 52%; + left: 12%; + transition-delay: 1s; +} + +.cp-cloud-2-1.visible { + animation: cloud-float-gentle 4.2s ease-in-out 1.8s infinite; +} + +.cp-cloud-2-2 { + bottom: 52%; + left: 30%; + transition-delay: 1.2s; +} + +.cp-cloud-2-2.visible { + animation: cloud-float-gentle 4.4s ease-in-out 2s infinite; +} + +.cp-cloud-3 { + bottom: 20%; + right: -15%; + transition-delay: 1.4s; +} + +.cp-cloud-3.visible { + animation: cloud-float-gentle 4.1s ease-in-out 2.2s infinite; +} + +@keyframes cloud-float-gentle { + 0%, 100% { + transform: translateY(0); + } + 50% { + transform: translateY(-8px); + } +} + +/* Connection line pulse */ +.cloud-connection { + animation: connection-pulse 3s ease-in-out infinite; +} + +@keyframes connection-pulse { + 0%, 100% { + opacity: 0.3; + stroke-width: 1; + } + 50% { + opacity: 0.6; + stroke-width: 1.5; + } +} + +/* Main hexagon breathing */ +.cloud-hex-1 { + transform-origin: center; + animation: hex-breathe 4s ease-in-out infinite; +} + +@keyframes hex-breathe { + 0%, 100% { + transform: scale(1); + opacity: 1; + } + 50% { + transform: scale(1.05); + opacity: 0.9; + } +} + +/* Center dot pulse */ +.cloud-dot-1 { + animation: dot-pulse 2s ease-in-out infinite; +} + +@keyframes dot-pulse { + 0%, 100% { + opacity: 0.4; + r: 2; + } + 50% { + opacity: 0.8;
+ r: 2.5; + } +} + +/* Side dots subtle fade */ +.cloud-dot-2, .cloud-dot-3 { + animation: dot-fade 3s ease-in-out infinite; +} + +@keyframes dot-fade { + 0%, 100% { + opacity: 0.3; + } + 50% { + opacity: 0.6; + } +} + +/* Sonar sweep animation */ +.sonar-sweep { + animation: sonar-rotate 8s linear infinite; + transform-origin: 60px 25px; +} + +@keyframes sonar-rotate { + from { + transform: rotate(0deg); + } + to { + transform: rotate(360deg); + } +} + +/* Resource icons sparkle animations */ +.cloud-resource-icon { + animation: icon-sparkle 3s ease-in-out infinite; +} + +.icon-user { + animation-delay: 0s; +} + +.icon-database { + animation-delay: 0.4s; +} + +.icon-key { + animation-delay: 0.8s; +} + +.icon-cpu { + animation-delay: 1.2s; +} + +.icon-docker { + animation-delay: 1.6s; +} + +.icon-server { + animation-delay: 2s; +} + +.icon-network { + animation-delay: 2.4s; +} + +.icon-harddrive { + animation-delay: 0.6s; +} + +.icon-settings { + animation-delay: 1s; +} + +.icon-shield { + animation-delay: 1.4s; +} + +.icon-lock { + animation-delay: 1.8s; +} + +.icon-memory { + animation-delay: 2.2s; +} + +.icon-globe { + animation-delay: 2.6s; +} + +@keyframes icon-sparkle { + 0%, 100% { + opacity: 0.4; + } + 50% { + opacity: 0.85; + } +} + +/* Dark mode adjustments */ +[data-theme='dark'] .cloud-connection { + stroke: rgba(44, 224, 191, 0.3); +} + +[data-theme='dark'] .cloud-hex-1 { + stroke: rgba(44, 224, 191, 0.4); +} + +[data-theme='dark'] .cloud-dot-1, +[data-theme='dark'] .cloud-particle-1, +[data-theme='dark'] .cloud-particle-2, +[data-theme='dark'] .cloud-particle-3 { + fill: rgba(44, 224, 191, 0.6); +} + +[data-theme='dark'] .cloud-badge text { + fill: #ffffff; +} + +[data-theme='dark'] .sonar-sweep circle { + fill: rgba(44, 224, 191, 0.7); +} + +[data-theme='dark'] .sonar-sweep circle:nth-child(2) { + fill: rgba(44, 224, 191, 0.5); +} + +[data-theme='dark'] .sonar-sweep circle:nth-child(3) { + fill: rgba(44, 224, 191, 0.4); +} + +[data-theme='dark'] 
.sonar-sweep circle:nth-child(4) { + fill: rgba(44, 224, 191, 0.3); +} + +[data-theme='dark'] .sonar-sweep circle:nth-child(5) { + fill: rgba(44, 224, 191, 0.2); +} + +[data-theme='dark'] .cloud-dot-2, +[data-theme='dark'] .cloud-dot-3 { + fill: rgba(194, 252, 238, 0.4); +} + +/* Responsive */ +@media (max-width: 996px) { + .cp-cloud-projection { + width: 227.7px; + height: 159.39px; + } +} + +@media (max-width: 768px) { + .cp-cloud-projection { + width: 242.88px; + height: 170.016px; + } + + .flying-cp { + width: 150px; + height: 150px; + } + + /* Axolotl fades out on mobile too */ + .image-src { + opacity: 1; + transition: opacity 0.6s ease-out; + } + + .image-src.scrolled { + opacity: 0; + } + + /* Hide CP3 and its cloud on mobile */ + .flying-cp-3, + .cp-cloud-3 { + display: none !important; + } + + /* CP1 centered below Cloud 1 */ + .flying-cp-1 { + bottom: -4%; + left: 15%; + } + + /* CP2 centered below the two clouds */ + .flying-cp-2 { + bottom: 20%; + right: 15%; + left: auto; + } + + .flying-cp-2.visible { + transform: translateY(0); + } + + /* Cloud 1 above CP1 */ + .cp-cloud-1 { + bottom: 18%; + left: 8%; + } + + /* Cloud 2-1 (left cloud above CP2) */ + .cp-cloud-2-1 { + bottom: 44%; + right: 12%; + left: auto; + } + + /* Cloud 2-2 (right cloud above CP2) */ + .cp-cloud-2-2 { + bottom: 44%; + right: -2%; + left: auto; + } +} + +/* ===== Cloud Provider Badges ===== */ +.cloud-providers { + margin-top: 16px; + padding-top: 16px; +} + +.cloud-providers-label { + font-weight: 600; + margin-bottom: 10px; + color: var(--ifm-color-emphasis-800); + font-size: 0.85rem; +} + +.cloud-providers-list { + display: flex; + gap: 8px; + flex-wrap: wrap; +} + +.provider-badge { + display: inline-flex; + align-items: center; + justify-content: center; + gap: 6px; + padding: 8px 16px; + background: var(--ifm-color-emphasis-100); + border: 1px solid var(--ifm-color-emphasis-200); + border-radius: 6px; + color: var(--ifm-color-emphasis-800); + text-decoration: none; + 
font-size: 0.85rem; + font-weight: 500; + transition: all 0.2s ease; + white-space: nowrap; + line-height: 1; + height: auto; + width: auto; + min-height: 0; + min-width: 0; + flex: 0 0 auto; +} + +.provider-badge:hover { + background: var(--teal-2); + border-color: var(--teal-6); + color: var(--teal-10); + transform: translateY(-1px); + box-shadow: 0 2px 6px rgba(4, 159, 154, 0.15); + text-decoration: none; +} diff --git a/src/pages/about/legal-disclosure.md b/src/pages/about/legal-disclosure.md new file mode 100644 index 0000000..8fbf009 --- /dev/null +++ b/src/pages/about/legal-disclosure.md @@ -0,0 +1,28 @@ +--- +title: Legal Disclosure +description: Legal Disclosure / Impressum for Open Control Plane +--- + +# Legal Disclosure / Impressum + +SAP Deutschland SE & Co. KG + +**Hauptsitz / Registered Office:** +SAP Deutschland SE & Co. KG +Hasso-Plattner-Ring 7 +69190 Walldorf + +Telefon: +49/6227/7-47474 +Telefax: +49/6227/7-57575 +info.germany@sap.com + +Sitz der Gesellschaft/Registered Office: Walldorf, Germany +Registergericht/Commercial Register Mannheim HRA 350654 + +**Persönlich haftende Gesellschafterin/General Partner:** SAP SE + +**Vorstand/Executive Board:** Christian Klein (CEO), Muhammad Alam, Dominik Asam, Thomas Saueressig, Sebastian Steinhäuser, Gina Vargiu-Breuer + +**Vorsitzender des Aufsichtsrats/Chairperson of the Supervisory Board:** Pekka Ala-Pietilä + +Registergericht/Commercial Register Mannheim HRB 719915 diff --git a/src/pages/about/privacy.md b/src/pages/about/privacy.md new file mode 100644 index 0000000..ec0eb80 --- /dev/null +++ b/src/pages/about/privacy.md @@ -0,0 +1,66 @@ +--- +title: Privacy Statement +description: Privacy Statement for Open Control Plane +--- + +# Privacy Statement + +## Controller + +SAP Deutschland SE & Co. KG +Hasso-Plattner-Ring 7 +69190 Walldorf, Germany + +Email: [privacy@sap.com](mailto:privacy@sap.com) + +SAP Deutschland SE & Co. 
KG ("SAP", "we", "us") is the controller responsible for processing your personal data when you visit the Open Control Plane documentation website at [openmcp-project.github.io/docs](https://openmcp-project.github.io/docs). + +## Data We Collect + +### Log Files + +When you visit our website, the web server automatically records log files that may contain the following information: + +- IP address (anonymized) +- Date and time of access +- Requested URL and referrer URL +- Browser type and version +- Operating system + +These log files are stored for a maximum of **7 days** and are used solely for ensuring the security and stability of the website. The legal basis for this processing is our legitimate interest in providing a secure website (Art. 6(1)(f) GDPR). + +### Cookies + +This website uses only **technically necessary cookies** that are required for the website to function properly. No tracking cookies or analytics cookies are used. These cookies do not require your consent under applicable law. + +### GitHub Issues and Contributions + +If you submit issues, pull requests, or other contributions through GitHub, GitHub's own privacy policy applies to the data you provide. We process your GitHub username and contribution content to maintain and improve the Open Control Plane project. + +## Your Rights + +Under the General Data Protection Regulation (GDPR), you have the following rights: + +- **Right of access** (Art. 15 GDPR) — You may request information about your personal data that we process. +- **Right to rectification** (Art. 16 GDPR) — You may request correction of inaccurate personal data. +- **Right to erasure** (Art. 17 GDPR) — You may request deletion of your personal data, subject to legal retention obligations. +- **Right to restriction of processing** (Art. 18 GDPR) — You may request that we restrict the processing of your personal data under certain conditions. +- **Right to data portability** (Art. 
20 GDPR) — You may request to receive your personal data in a structured, commonly used, and machine-readable format. +- **Right to object** (Art. 21 GDPR) — You may object to the processing of your personal data based on legitimate interests at any time. + +You also have the right to lodge a complaint with a supervisory authority, in particular in the EU Member State of your habitual residence, place of work, or place of the alleged infringement. + +## Contact + +For questions or requests regarding data protection, please contact: + +**SAP Data Protection Officer** +Email: [privacy@sap.com](mailto:privacy@sap.com) + +SAP Deutschland SE & Co. KG +Hasso-Plattner-Ring 7 +69190 Walldorf, Germany + +## Changes to This Privacy Statement + +We may update this privacy statement from time to time. Any changes will be posted on this page. We encourage you to review this statement periodically. diff --git a/src/pages/about/terms-of-use.md b/src/pages/about/terms-of-use.md new file mode 100644 index 0000000..b85183f --- /dev/null +++ b/src/pages/about/terms-of-use.md @@ -0,0 +1,46 @@ +--- +title: Terms of Use +description: Terms of Use for Open Control Plane +--- + +# Terms of Use + +## Introduction + +This website ([openmcp-project.github.io/docs](https://openmcp-project.github.io/docs)) is maintained by Linux Foundation Europe as part of the Open Control Plane project. By accessing or using this website, you agree to be bound by these Terms of Use. If you do not agree to these terms, please do not use this website. + +## Trademarks + +Open Control Plane and its associated logos and trademarks are trademarks of Linux Foundation Europe. Use of these trademarks must comply with the Linux Foundation Europe [Trademark Usage Guidelines](https://linuxfoundation.eu/policies). You may not use these trademarks without prior written permission, except as permitted by applicable law. 
+ +All other trademarks, service marks, and trade names referenced on this site are the property of their respective owners. + +## Copyrights and Licenses + +### Content + +Except where otherwise noted, content on this site is licensed under the [Creative Commons Attribution 4.0 International License (CC-BY 4.0)](https://creativecommons.org/licenses/by/4.0/). You are free to share and adapt the content, provided you give appropriate credit, provide a link to the license, and indicate if changes were made. + +### Software + +Software source code published by the Open Control Plane project is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0), unless otherwise noted in the respective repository. You may obtain a copy of the license at the link above. + +## Disclaimer of Warranties + +This website and its content are provided on an "AS IS" and "AS AVAILABLE" basis, without warranties of any kind, either express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, non-infringement, or accuracy. Linux Foundation Europe does not warrant that the website will be uninterrupted, error-free, or free of harmful components. + +## Limitation of Liability + +To the fullest extent permitted by applicable law, Linux Foundation Europe, its directors, officers, employees, and agents shall not be liable for any indirect, incidental, special, consequential, or punitive damages, or any loss of profits or revenue, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from your access to or use of (or inability to access or use) this website or its content. + +## Privacy + +Your use of this website is also subject to our [Privacy Statement](/about/privacy). Please review it to understand how we collect and use information. 
+ +## General + +These Terms of Use are governed by the laws of Belgium, without regard to conflict of law principles. Linux Foundation Europe reserves the right to modify these terms at any time. Changes will be posted on this page with an updated effective date. Your continued use of the website after such changes constitutes acceptance of the revised terms. + +If any provision of these Terms of Use is found to be unenforceable, the remaining provisions will continue in full force and effect. + +If you have questions about these terms, please contact the Open Control Plane project through [GitHub](https://github.com/openmcp-project). diff --git a/src/pages/index.js b/src/pages/index.js new file mode 100644 index 0000000..64082f6 --- /dev/null +++ b/src/pages/index.js @@ -0,0 +1,547 @@ +/* eslint-disable import/no-unresolved */ +import React, { useEffect, useRef } from "react"; +import clsx from "clsx"; +import Layout from "@theme/Layout"; +import ThemedImage from "@theme/ThemedImage"; +import useDocusaurusContext from "@docusaurus/useDocusaurusContext"; +import Link from "@docusaurus/Link"; +import useBaseUrl from "@docusaurus/useBaseUrl"; + +export default function Home() { + const { siteConfig } = useDocusaurusContext(); + + useEffect(() => { + const innersourceText = document.getElementsByClassName("typing-open-source")[0]; + const onScroll = () => { + if (innersourceText && isInViewport(innersourceText)) { + innersourceText.classList.add("animate"); + } + }; + window.addEventListener("scroll", onScroll, { passive: true }); + return () => window.removeEventListener("scroll", onScroll); + }, []); + + useEffect(() => { + const navbar = document.querySelector(".navbar"); + const axolotl = document.querySelector(".image-src"); + const controlPlanes = document.querySelectorAll(".flying-cp"); + const clouds = document.querySelectorAll(".cp-cloud-projection"); + + // Trigger animation immediately + axolotl?.classList.add("scrolled"); + controlPlanes.forEach((cp) => 
cp.classList.add("visible")); + clouds.forEach((cloud) => cloud.classList.add("visible")); + + const handleScroll = () => { + if (window.scrollY < 10) { + navbar?.classList.add("navbar--transparent"); + } else { + navbar?.classList.remove("navbar--transparent"); + } + }; + + window.addEventListener("scroll", handleScroll, { passive: true }); + handleScroll(); + return () => { + window.removeEventListener("scroll", handleScroll); + }; + }, []); + + function isInViewport(element) { + const rect = element.getBoundingClientRect(); + return ( + rect.top >= 0 && + rect.left >= 0 && + rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) && + rect.right <= (window.innerWidth || document.documentElement.clientWidth) + ); + } + + function FeatureCard({ children }) { + const cardRef = useRef(null); + const glowRef = useRef(null); + + const handleMouseMove = (e) => { + if (!cardRef.current || !glowRef.current) return; + const rect = cardRef.current.getBoundingClientRect(); + glowRef.current.style.left = `${e.clientX - rect.left}px`; + glowRef.current.style.top = `${e.clientY - rect.top}px`; + }; + + return ( +
+
+ {children} +
+ ); + } + + return ( + +
+
+
+

+ open control plane docs + Bring the power of open-source to your enterprise! +

+
+ +
+
+
+
+
+ Crossplane Axolotl + Control Plane + {/* Cloud 1 - Purple/Blue */} + + + + + + + + + + + + + + + + {/* Badge: EU-gov - top left */} + + + EU-gov + + + + + {/* Sonar sweep line */} + + + + + + + + {/* User icon - top left */} + + + + {/* Database icon - top right */} + + + + + {/* Key icon - left */} + + + + + {/* Server icon - right */} + + + + + + {/* Network icon - bottom center */} + + + + + + + + + + + + + Control Plane + {/* Cloud 2-1 - Teal (left) */} + + + + + + + + + + + + + + + + {/* Badge: dev - top left */} + + + dev + + + + + {/* Sonar sweep line */} + + + + + + + + {/* CPU icon - top left */} + + + + + {/* Container/Docker icon - top right */} + + + + + {/* Hard Drive icon - left */} + + + + + {/* Settings icon - right */} + + + + + + + + + + {/* Cloud 2-2 - Pink (right) */} + + + + + + + + + + + + + + + + {/* Badge: prod - top right */} + + + prod + + + + + {/* Sonar sweep line */} + + + + + + + + {/* CPU icon - top left */} + + + + + {/* Container/Docker icon - top right */} + + + + + {/* Hard Drive icon - left */} + + + + + {/* Settings icon - right */} + + + + + + + + + + + Control Plane + {/* Cloud 3 - Orange */} + + + + + + + + + + + + + + + + {/* Badge: EU-public - top right */} + + + EU-public + + + + + {/* Sonar sweep line */} + + + + + + + + {/* CPU icon - top left */} + + + + + {/* User icon - top right */} + + + + {/* Container/Docker icon - left */} + + + + + {/* Memory/RAM icon - right */} + + + + + {/* Globe/Network icon - bottom center */} + + + + + + + + + +
+
+
+
+ +
+
+
+
+ + +

Everything in code

+

Define your entire cloud landscape using code. Always know exactly what's defined and leverage review-based workflows, version control, and much more.

+
+ + +

Continuous self-healing

+

Keep your landscape in sync. Crossplane continuously observes the desired and the actual state and reconciles any differences automatically.

+
+ + +

One syntax for all

+

+ Use a unified approach to define and manage resources across multiple cloud providers and services, reducing infrastructure complexity significantly. +

+
+ + +

Designed for reuse

+

+ Define your landscapes in modular building blocks using Crossplane Compositions or Helm charts. Replicate modules easily across different regions or stages. +

+
+ + +

Run a platform

+

+ Pre-build your own platform tailored to the specific needs of your organization and offer it to development teams in a self-service way. +

+
+ + +

Built for everyone

+

+ Whether you are a cloud expert or just getting started — our providers are designed to help everyone. + We run 100% open-source. +

+
+
+
+
+ +
+ + Start contributing + + + or explore{" "} + + our cloud native ecosystem + + +
+ +
+
+
+
+
100% open-source
+
+ Crossplane based on CNCF +

+ Through technical providers, we create, update, and delete the cloud resources we want to + orchestrate. They allow us to describe environments in code. Most SAP cloud services do not + offer providers and are therefore not orchestrable yet. Since their APIs are public, we can build those + ourselves. +
+
+ We truly believe in open standards — our solutions are based on the{" "} + + open-source project Crossplane + + . Crossplane is an incubating project of the{" "} + + Cloud Native Computing Foundation (CNCF) + + . +
+
+ + We believe that no single team can achieve a fully orchestrated environment by writing providers on + their own. +
Only through collaboration can we make SAP's cloud services 100% orchestrable for everyone.
+
+
+

+ + 💪 See our projects + +
+ + and learn how to contribute + +
+
+
+
+
+
+
+ ); +} \ No newline at end of file diff --git a/src/theme/Footer/index.js b/src/theme/Footer/index.js new file mode 100644 index 0000000..7e611ec --- /dev/null +++ b/src/theme/Footer/index.js @@ -0,0 +1,71 @@ +/* eslint-disable import/no-unresolved */ +import React from 'react'; +import useBaseUrl from '@docusaurus/useBaseUrl'; +import Link from '@docusaurus/Link'; + +export default function Footer() { + const bmwkEuImg = useBaseUrl('/img/BMWK-EU.png'); + + return ( +
+ {/* Row 1: EU Funding Banner */} +
+
+
+ EU and BMWK funding logos +
+
+

+ Funded by the European Union – NextGenerationEU. +

+

+ The views and opinions expressed are solely those of the author(s) and do not + necessarily reflect the views of the European Union or the European Commission. + Neither the European Union nor the European Commission can be held responsible + for them. +

+
+
+ + NeoNephos Logo + +
+
+
+ + {/* Row 2: Copyright */} +
+
+

+ Copyright © Linux Foundation Europe.{' '} + Open Control Plane is a project of the NeoNephos Foundation. For + applicable policies including privacy policy, terms of use and trademark usage + guidelines, please see{' '} + + https://linuxfoundation.eu + + . Linux is a registered trademark of Linus Torvalds. +

+
+
+ + {/* Row 3: Legal Links */} +
+
+ +
+
+
+ ); +} diff --git a/static/img/co_axolotl.png b/static/img/co_axolotl.png new file mode 100644 index 0000000..66a9ea7 Binary files /dev/null and b/static/img/co_axolotl.png differ diff --git a/static/img/co_axolotl.svg b/static/img/co_axolotl.svg new file mode 100644 index 0000000..0e25fe6 --- /dev/null +++ b/static/img/co_axolotl.svg @@ -0,0 +1,285 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/img/co_axolotl_mirrored.png b/static/img/co_axolotl_mirrored.png new file mode 100644 index 0000000..cc5e0c4 Binary files /dev/null and b/static/img/co_axolotl_mirrored.png differ diff --git a/static/img/co_axolotl_transparent.png b/static/img/co_axolotl_transparent.png new file mode 100644 index 0000000..b1270c8 Binary files /dev/null and b/static/img/co_axolotl_transparent.png differ diff --git a/static/img/contribution/Picture.png b/static/img/contribution/Picture.png new file mode 100644 index 0000000..b361d21 Binary files /dev/null and b/static/img/contribution/Picture.png differ diff --git a/static/img/contribution/Picture0.png b/static/img/contribution/Picture0.png new file mode 100644 index 0000000..ccc2655 Binary files /dev/null and b/static/img/contribution/Picture0.png differ diff --git a/static/img/contribution/Picture1.png b/static/img/contribution/Picture1.png new file mode 100644 index 0000000..8d002c0 Binary files /dev/null and b/static/img/contribution/Picture1.png differ diff --git 
a/static/img/contribution/contribution_open_github_actions.png b/static/img/contribution/contribution_open_github_actions.png new file mode 100644 index 0000000..1f3e4ac Binary files /dev/null and b/static/img/contribution/contribution_open_github_actions.png differ diff --git a/static/img/contribution/contribution_open_publish_release_candidate.png b/static/img/contribution/contribution_open_publish_release_candidate.png new file mode 100644 index 0000000..f8c2245 Binary files /dev/null and b/static/img/contribution/contribution_open_publish_release_candidate.png differ diff --git a/static/img/contribution/contribution_search_for_branch.png b/static/img/contribution/contribution_search_for_branch.png new file mode 100644 index 0000000..044cef6 Binary files /dev/null and b/static/img/contribution/contribution_search_for_branch.png differ diff --git a/static/img/contribution/contribution_select_package.png b/static/img/contribution/contribution_select_package.png new file mode 100644 index 0000000..45592e6 Binary files /dev/null and b/static/img/contribution/contribution_select_package.png differ diff --git a/static/img/contribution/contribution_select_your_image.png b/static/img/contribution/contribution_select_your_image.png new file mode 100644 index 0000000..9311a56 Binary files /dev/null and b/static/img/contribution/contribution_select_your_image.png differ diff --git a/static/img/contribution/create_issue.gif b/static/img/contribution/create_issue.gif new file mode 100644 index 0000000..bff2153 Binary files /dev/null and b/static/img/contribution/create_issue.gif differ diff --git a/static/img/contribution/github_discussion.png b/static/img/contribution/github_discussion.png new file mode 100644 index 0000000..91220e0 Binary files /dev/null and b/static/img/contribution/github_discussion.png differ diff --git a/static/img/cp1.png b/static/img/cp1.png new file mode 100644 index 0000000..c3d1fb3 Binary files /dev/null and b/static/img/cp1.png differ diff --git 
a/static/img/cp2.png b/static/img/cp2.png new file mode 100644 index 0000000..881cb62 Binary files /dev/null and b/static/img/cp2.png differ diff --git a/static/img/cp3.png b/static/img/cp3.png new file mode 100644 index 0000000..0cac594 Binary files /dev/null and b/static/img/cp3.png differ diff --git a/static/img/cp4.png b/static/img/cp4.png new file mode 100644 index 0000000..e675875 Binary files /dev/null and b/static/img/cp4.png differ diff --git a/static/img/docusaurus-social-card.jpg b/static/img/docusaurus-social-card.jpg deleted file mode 100644 index ffcb448..0000000 Binary files a/static/img/docusaurus-social-card.jpg and /dev/null differ diff --git a/static/img/docusaurus.png b/static/img/docusaurus.png deleted file mode 100644 index f458149..0000000 Binary files a/static/img/docusaurus.png and /dev/null differ diff --git a/static/img/favicon.ico b/static/img/favicon.ico index c01d54b..56bf504 100644 Binary files a/static/img/favicon.ico and b/static/img/favicon.ico differ diff --git a/static/img/icons/icon-align-dark.png b/static/img/icons/icon-align-dark.png new file mode 100644 index 0000000..fd13242 Binary files /dev/null and b/static/img/icons/icon-align-dark.png differ diff --git a/static/img/icons/icon-align.png b/static/img/icons/icon-align.png new file mode 100644 index 0000000..6202a8e Binary files /dev/null and b/static/img/icons/icon-align.png differ diff --git a/static/img/icons/icon-code-dark.png b/static/img/icons/icon-code-dark.png new file mode 100644 index 0000000..f3e0d5b Binary files /dev/null and b/static/img/icons/icon-code-dark.png differ diff --git a/static/img/icons/icon-code.png b/static/img/icons/icon-code.png new file mode 100644 index 0000000..fff2348 Binary files /dev/null and b/static/img/icons/icon-code.png differ diff --git a/static/img/icons/icon-platform-dark.png b/static/img/icons/icon-platform-dark.png new file mode 100644 index 0000000..a283bd2 Binary files /dev/null and b/static/img/icons/icon-platform-dark.png 
differ diff --git a/static/img/icons/icon-platform.png b/static/img/icons/icon-platform.png new file mode 100644 index 0000000..bb43e1c Binary files /dev/null and b/static/img/icons/icon-platform.png differ diff --git a/static/img/icons/icon-puzzle-dark.png b/static/img/icons/icon-puzzle-dark.png new file mode 100644 index 0000000..9ea7531 Binary files /dev/null and b/static/img/icons/icon-puzzle-dark.png differ diff --git a/static/img/icons/icon-puzzle.png b/static/img/icons/icon-puzzle.png new file mode 100644 index 0000000..4e6c16d Binary files /dev/null and b/static/img/icons/icon-puzzle.png differ diff --git a/static/img/icons/icon-reconcile-dark.png b/static/img/icons/icon-reconcile-dark.png new file mode 100644 index 0000000..f8f57d9 Binary files /dev/null and b/static/img/icons/icon-reconcile-dark.png differ diff --git a/static/img/icons/icon-reconcile.png b/static/img/icons/icon-reconcile.png new file mode 100644 index 0000000..1d6e440 Binary files /dev/null and b/static/img/icons/icon-reconcile.png differ diff --git a/static/img/icons/icon-simple-dark.png b/static/img/icons/icon-simple-dark.png new file mode 100644 index 0000000..9a2212b Binary files /dev/null and b/static/img/icons/icon-simple-dark.png differ diff --git a/static/img/icons/icon-simple.png b/static/img/icons/icon-simple.png new file mode 100644 index 0000000..88fddba Binary files /dev/null and b/static/img/icons/icon-simple.png differ diff --git a/static/img/landingpage-crossplane.png b/static/img/landingpage-crossplane.png new file mode 100644 index 0000000..0a09821 Binary files /dev/null and b/static/img/landingpage-crossplane.png differ diff --git a/static/img/logo.svg b/static/img/logo.svg deleted file mode 100644 index 9db6d0d..0000000 --- a/static/img/logo.svg +++ /dev/null @@ -1 +0,0 @@ - \ No newline at end of file diff --git a/static/img/logos/crossplane.png b/static/img/logos/crossplane.png new file mode 100644 index 0000000..94280b8 Binary files /dev/null and 
b/static/img/logos/crossplane.png differ diff --git a/static/img/logos/external-secrets.png b/static/img/logos/external-secrets.png new file mode 100644 index 0000000..49d1907 Binary files /dev/null and b/static/img/logos/external-secrets.png differ diff --git a/static/img/logos/flux.png b/static/img/logos/flux.png new file mode 100644 index 0000000..1efd6f8 Binary files /dev/null and b/static/img/logos/flux.png differ diff --git a/static/img/logos/gardener.png b/static/img/logos/gardener.png new file mode 100644 index 0000000..a845665 Binary files /dev/null and b/static/img/logos/gardener.png differ diff --git a/static/img/logos/kubernetes.png b/static/img/logos/kubernetes.png new file mode 100644 index 0000000..1a9f546 Binary files /dev/null and b/static/img/logos/kubernetes.png differ diff --git a/static/img/logos/kyverno.png b/static/img/logos/kyverno.png new file mode 100644 index 0000000..4e33936 Binary files /dev/null and b/static/img/logos/kyverno.png differ diff --git a/static/img/logos/landscaper.png b/static/img/logos/landscaper.png new file mode 100644 index 0000000..a845665 Binary files /dev/null and b/static/img/logos/landscaper.png differ diff --git a/static/img/logos/ocm.png b/static/img/logos/ocm.png new file mode 100644 index 0000000..88c7d30 Binary files /dev/null and b/static/img/logos/ocm.png differ diff --git a/static/img/logos/ocm.svg b/static/img/logos/ocm.svg new file mode 100644 index 0000000..f810b76 --- /dev/null +++ b/static/img/logos/ocm.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/static/img/neonephos.svg b/static/img/neonephos.svg new file mode 100644 index 0000000..3dd93e7 --- /dev/null +++ b/static/img/neonephos.svg @@ -0,0 +1,16 @@ + + + + + + + + diff --git a/static/img/undraw_docusaurus_mountain.svg b/static/img/undraw_docusaurus_mountain.svg deleted file mode 100644 index af961c4..0000000 --- a/static/img/undraw_docusaurus_mountain.svg +++ /dev/null @@ -1,171 +0,0 @@ - - Easy to Use - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/static/img/undraw_docusaurus_react.svg b/static/img/undraw_docusaurus_react.svg deleted file mode 100644 index 94b5cf0..0000000 --- a/static/img/undraw_docusaurus_react.svg +++ /dev/null @@ -1,170 +0,0 @@ - - Powered by React - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/static/img/undraw_docusaurus_tree.svg b/static/img/undraw_docusaurus_tree.svg deleted file mode 100644 index d9161d3..0000000 --- a/static/img/undraw_docusaurus_tree.svg +++ /dev/null @@ -1,40 +0,0 @@ - - Focus on What Matters - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -