
Platform Architecture

Management Plane, Runtime Clusters, GitOps delivery — how gh0stcloud is built.

gh0stcloud is split into two clearly separated layers: the Management Plane coordinates identity, secrets and billing, while Runtime Clusters run tenant workloads in isolation. The only permitted bridge between the two is GitOps.

Architecture principle: Desired state is written to Git, reconciled by Flux CD, and applied by the provisioning layer. Direct mutations to runtime objects are not a supported operational path.
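In Flux CD terms, this principle maps to a Git source plus a Kustomization that reconciles it continuously. A minimal sketch — the repository URL, object names, paths and intervals here are illustrative assumptions, not gh0stcloud's actual configuration:

```yaml
# Illustrative only — URL, names and intervals are assumptions, not the real config.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://example.com/git/platform-config   # hypothetical repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/production                    # hypothetical layout
  prune: true    # objects removed from Git are removed from the cluster
```

With prune enabled, Git is authoritative in both directions: adding and removing desired state are the same operation, a commit.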

Architecture overview

[Diagram: gh0stcloud platform architecture — management services and runtime clusters are physically separated; GitOps is the controlled delivery bridge.]

Cluster topology

Environment             | Role                | Hardware                                                     | Location
Management Environment  | Management Plane    | dedicated bare-metal server                                  | Germany
Development Environment | Development Runtime | shared compute nodes (control + worker)                      | Germany
Production Environment  | Production Runtime  | shared compute nodes + optional dedicated bare-metal server  | Germany

All clusters run K3s with Cilium as CNI and are connected to a private VPN mesh via NetBird.

Management Plane

The Management Plane hosts all platform services and carries no tenant workloads:

Service    | Function
Keycloak   | Identity provider — SSO, OIDC, PKCE, OAuth2, tenant organisations
OpenBao    | Secrets authority — credentials distributed to namespaces via the External Secrets Operator (ESO), never as plain text
gh0stplane | Platform control API — billing aggregation and portal backend
Flux CD    | GitOps reconciliation engine

Keycloak manages tenant organisations as isolated identity boundaries. Each tenant gets a dedicated organisation with scoped groups and roles — no shared identity context across tenants.

OpenBao is the sole secrets authority. Workloads receive credentials exclusively through automatic synchronisation into their namespace — secrets are never stored in the repository or manually applied.
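This synchronisation is typically expressed as an ExternalSecret that pulls from OpenBao and materialises a Kubernetes Secret in the tenant namespace. A sketch under assumed names — the store name, namespace, secret path and keys are hypothetical:

```yaml
# Illustrative sketch — store name, namespace, path and keys are assumptions.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials
  namespace: tenant-a              # hypothetical tenant namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: openbao                  # assumed store pointing at the OpenBao API
  target:
    name: db-credentials           # Kubernetes Secret created in the namespace
  data:
    - secretKey: password
      remoteRef:
        key: tenants/tenant-a/db   # hypothetical OpenBao path
        property: password
```

The Secret exists only inside the cluster at runtime; neither the Git repository nor any operator ever handles the plain-text value.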

gh0stplane is the platform's business logic: billing aggregation from usage data, the portal API, and monthly invoice generation.

Runtime Clusters

Each runtime cluster runs the same service baseline:

Service             | Function
Flux CD             | GitOps pull from the tenant source repository
Provisioning system | Reconciles platform configuration and tenant namespaces
Kyverno             | Admission policies, NetworkPolicy baseline, namespace guardrails
Prometheus + Alloy  | Metrics collection and log/trace forwarding
Beyla               | eBPF-based automatic application instrumentation — no code changes required

Each tenant gets at least one dedicated, isolated namespace. Network policies blocking cross-tenant traffic are applied automatically.
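The isolation baseline corresponds to a default-deny NetworkPolicy plus a same-namespace allow rule. A sketch of what such a baseline looks like — in practice these policies are generated automatically per namespace (e.g. by Kyverno), and the namespace name here is hypothetical:

```yaml
# Illustrative baseline — real policies are generated automatically per namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a          # hypothetical tenant namespace
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: tenant-a
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}      # any pod in the same namespace
  policyTypes:
    - Ingress
```

The effect: traffic inside a tenant namespace flows freely, while traffic from any other namespace is dropped by default.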

GitOps delivery — how changes flow

All changes follow a single, auditable path:

Source Repository
  └─► Flux CD
        └─► Provisioning layer
              └─► Namespace · RBAC · Policy · Workload

There is no click-ops path. Anyone who wants to change something in the cluster does it through a Git commit. The reconciliation order is strictly governed — from core services through to tenant workloads.
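Strict ordering of this kind can be expressed in Flux with dependsOn between Kustomizations, so tenant workloads only reconcile once the core baseline is healthy. A sketch under assumed names — the Kustomization and source names are illustrative, not gh0stcloud's actual layout:

```yaml
# Illustrative — Kustomization and source names are assumptions.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenants
  namespace: flux-system
spec:
  dependsOn:
    - name: core-services      # hypothetical Kustomization for the platform baseline
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config      # hypothetical Git source
  path: ./tenants
  prune: true
```

If the core-services reconciliation fails, the tenant layer simply waits — ordering is enforced by the controller, not by operator discipline.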

Network and connectivity

  • Internal: NetBird VPN mesh connects all clusters. No public exposure for internal APIs.
  • Tenant ingress: Via a controlled ingress controller per cluster, configurable per tenant.
  • Egress: Dedicated egress node for production, so outbound traffic leaves from a stable, known set of IPs.

Open-source foundation

gh0stcloud is built entirely on CNCF and open-source projects: Kubernetes, K3s, Cilium, Flux CD, Keycloak, OpenBao, Prometheus, Grafana, Kyverno. Proprietary knowledge lives in the operations and automation layer — not in lock-in components.

Questions or ready to get started?

Talk to us