
Install the Controller in Kubernetes

ziti-controller

Version: 1.1.5 Type: application AppVersion: 1.1.15

Host an OpenZiti controller in Kubernetes

Requirements

| Repository | Name | Version |
|------------|------|---------|
| https://charts.jetstack.io | cert-manager | ~1.14.0 |
| https://charts.jetstack.io | trust-manager | ~0.7.0 |
| https://kubernetes.github.io/ingress-nginx/ | ingress-nginx | ~4.10.1 |

Overview

This chart runs a Ziti controller in Kubernetes. It uses the custom resources provided by cert-manager and trust-manager, i.e., Issuer, Certificate, and Bundle.

The client API must be published with a TLS passthrough Ingress, NodePort, or LoadBalancer. The ctrl plane and management API share the client API's TLS listener, so they're reached through the same address by default.

Requirements

Add the OpenZiti Charts Repo to Helm

helm repo add openziti https://docs.openziti.io/helm-charts/

Install Required Custom Resource Definitions

This chart requires declaring the Certificate, Issuer, and Bundle custom resource APIs before installation.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.crds.yaml
kubectl apply -f https://raw.githubusercontent.com/cert-manager/trust-manager/v0.9.0/deploy/crds/trust.cert-manager.io_bundles.yaml
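
You can confirm the required CRDs are present before installing the chart:

kubectl get crd certificates.cert-manager.io issuers.cert-manager.io bundles.trust.cert-manager.io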

Optional Sub-Charts

Ziti Controller requires the cert-manager and trust-manager operators running in the cluster. You may use existing deployments of these operators, or install either or both as sub-charts by setting additional input values on the command line.

--set cert-manager.enabled="true" --set trust-manager.enabled="true"

Or, as YAML:

cert-manager:
  enabled: true
trust-manager:
  enabled: true

Minimal Installation

This first example shows a minimal installation for a Kubernetes distribution that provides TLS pass-through for Service type LoadBalancer, e.g., k3s, k3d, Minikube. This is useful for environments where provisioning a LoadBalancer with TLS passthrough is free or its cost is justifiable.

You must supply these values when you install the chart.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| clientApi.advertisedHost | string | `nil` | the DNS name that edge clients and routers will resolve to reach this controller's edge client API |
| clientApi.advertisedPort | string | `nil` | the TCP port associated with the advertisedHost to advertise to edge clients and routers |

helm install \
--namespace ziti-controller ziti-controller-minimal1 \
openziti/ziti-controller \
--set clientApi.advertisedHost="ziti-controller-minimal.example.com" \
--set clientApi.advertisedPort="443"
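
The release can take a minute or two to become ready. One way to wait for it (the namespace matches the install command above):

kubectl -n ziti-controller wait --for=condition=Available deployment --all --timeout=240s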

A default admin user and password will be generated and saved to a secret during installation. The credentials can be retrieved using this command:

kubectl get secret \
-n ziti-controller ziti-controller-minimal1-admin-secret \
-o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'

Visit the Ziti Administration Console (ZAC): https://ziti-controller-minimal.example.com/zac/

You may log in with the ziti CLI in one command, or omit the --password part to be prompted:

ziti edge login ziti-controller-minimal.example.com:443 \
--yes \
--username admin \
--password $(
kubectl -n ziti-controller \
get secrets ziti-controller-minimal1-admin-secret \
-o go-template='{{index .data "admin-password" | base64decode }}'
)

Using ClusterIP Services with an Ingress Controller

The default K8s service type for this chart is ClusterIP. You can publish these cluster-internal services with an Ingress resource. You need an Ingress Controller. Here's an example of using the community ingress-nginx chart to provision ingresses for the controller's ClusterIP services.

Ensure you have the ingress-nginx chart installed with controller.extraArgs.enable-ssl-passthrough=true. You can verify this feature is enabled by running kubectl describe pods {ingress-nginx-controller pod} and checking the args for --enable-ssl-passthrough=true.
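
If you're installing ingress-nginx fresh, you can set the flag at install time; a sketch using the community chart:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.extraArgs.enable-ssl-passthrough=true

To check the running pod's args, you can grep the pod description (the label is the chart's standard selector):

kubectl -n ingress-nginx describe pods -l app.kubernetes.io/name=ingress-nginx | grep enable-ssl-passthrough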

If necessary, patch the ingress-nginx deployment to enable TLS passthrough.

kubectl patch deployment "ingress-nginx-controller" \
--namespace ingress-nginx \
--type json \
--patch '[{"op": "add",
"path": "/spec/template/spec/containers/0/args/-",
"value":"--enable-ssl-passthrough"
}]'

Create a Helm chart values file like this.

# /tmp/controller-values.yml
clientApi:
  advertisedHost: ziti-controller-managed.example.com
  advertisedPort: 443
  service:
    type: ClusterIP
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"

Now install or upgrade this controller chart with your values file.

helm install \
--namespace ziti-controller ziti-controller-managed1 \
openziti/ziti-controller \
--values /tmp/controller-values.yml

Expose the Router Control Plane

This is applicable if you have any routers outside the Ziti controller's cluster. You must configure a pass-through TLS LoadBalancer or Ingress for the control plane service. Routers running in the same cluster as the controller can use the cluster service named {controller release}-ctrl (the "ctrl" endpoint). This example demonstrates a pass-through Ingress resource for ingress-nginx.

Merge this with your Helm chart values file before installing or upgrading.

ctrlPlane:
  advertisedHost: ziti-controller-managed-ctrl.example.com
  advertisedPort: 443
  service:
    enabled: true
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
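
An external router then dials this advertised address. As a router-side sketch, assuming the companion openziti/ziti-router chart, whose ctrl.endpoint value names the controller's ctrl plane address (enrollment values are omitted):

ctrl:
  # the advertised ctrl plane address published by the Ingress above
  endpoint: ziti-controller-managed-ctrl.example.com:443

A router inside the same cluster can instead point ctrl.endpoint at the {controller release}-ctrl cluster service.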

Extra Security for the Management API

You can split the client and management APIs into separate cluster services by setting managementApi.service.enabled=true. With this configuration, you'll have an additional cluster service named {release}-mgmt that is the management API, and the client API will not have management features.
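
For example, a minimal values sketch for the split (the management API hostname here is illustrative):

managementApi:
  advertisedHost: ziti-controller-managed-mgmt.example.com
  advertisedPort: 443
  service:
    enabled: true
    type: ClusterIP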

This Helm chart's values allow for both operational scenarios: combined and split. The default choice is to expose the combined client and management APIs as the cluster service named {release}-client, which is convenient because you can use the ziti CLI immediately. For additional security, you may shelter the management API by splitting these two sets of features, exposing them as separate API servers. After the split, you can access the management API in several ways:

  • deploy a tunneler to bind a Ziti service targeting {release}-mgmt.{namespace}.svc:{port}.
  • kubectl -n {namespace} port-forward deployments/{release}-mgmt 8443:{port}

The web console (ZAC) is always bound to the same web listener as the management API, so you can access it at the /zac/ path of the same URL.
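
For instance, with the port-forward approach from the list above (the release name and ports are illustrative; the default container port is 1280):

kubectl -n ziti-controller port-forward deployments/ziti-controller-managed1-mgmt 8443:1280
# in a second terminal
ziti edge login localhost:8443 --yes --username admin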

Advanced PKI

The default configuration generates a singular PKI root of trust for all the controller's servers and the edge signer CA. Optionally, you may provide the name of a cert-manager Issuer or ClusterIssuer to become the root of trust for the Ziti controller's identity.

Merge this with your Helm chart values file before installing or upgrading.

ctrlPlane:
  alternativeIssuer:
    kind: ClusterIssuer
    name: my-alternative-cluster-issuer

You may also configure the Ziti controller to use separate PKI roots of trust for its three main identities: control plane, edge signer, and web bindings.

For example, to use a separate CA for the edge signer function, merge this with your Helm chart values file before installing or upgrading.

edgeSignerPki:
  enabled: true
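
Putting these pieces together, a values sketch that roots the controller's identity in an alternative ClusterIssuer while keeping separate roots of trust for the edge signer and web bindings (the issuer name repeats the example above):

ctrlPlane:
  alternativeIssuer:
    kind: ClusterIssuer
    name: my-alternative-cluster-issuer
edgeSignerPki:
  enabled: true
webBindingPki:
  enabled: true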

Prometheus Monitoring

This chart provides a ziti-controller-prometheus Kubernetes service for Prometheus, which is disabled by default and can be enabled with prometheus.service.enabled. Enabling it also creates a Prometheus Operator ServiceMonitor that configures scraping of the Prometheus endpoint. To get a full set of metrics, also enable fabric.events.enabled.

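A values sketch that enables the metrics service, its ServiceMonitor, and fabric events:

fabric:
  events:
    enabled: true
prometheus:
  service:
    enabled: true
  serviceMonitor:
    enabled: true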

Values Reference

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| additionalConfigs | object | `{"ctrl":{},"events":{},"healthChecks":{},"network":{},"web":{}}` | append additional config blocks in specific top-level keys: edge, web, network, ctrl; if events are defined here, they replace the default events section entirely |
| additionalVolumes | list | `[]` | additional volumes to mount to the ziti-controller container |
| affinity | object | `{}` | deployment template spec affinity |
| ca.clusterDomain | string | `"cluster.local"` | set a custom cluster domain if other than cluster.local |
| ca.duration | string | `"87840h"` | Go time.Duration string format |
| ca.renewBefore | string | `"720h"` | Go time.Duration string format |
| cert-manager.enableCertificateOwnerRef | bool | `true` | clean up the secret when the certificate is deleted |
| cert-manager.enabled | bool | `false` | install the cert-manager subchart |
| cert-manager.installCRDs | bool | `false` | CRDs must be applied in advance of installing the parent chart |
| cert.duration | string | `"87840h"` | server certificate duration as Go time.Duration string format |
| cert.renewBefore | string | `"720h"` | renew server certificates before expiry as Go time.Duration string format |
| clientApi.advertisedHost | string | `nil` | global DNS name by which routers can resolve a reachable IP for this service |
| clientApi.advertisedPort | int | `443` | cluster service, node port, load balancer, and ingress port |
| clientApi.altIngress.advertisedHost | string | `""` | alternative ingress host, e.g., ziti.example.com |
| clientApi.altIngress.annotations | object | `{}` | ingress annotations, e.g., to configure ingress-nginx |
| clientApi.altIngress.enabled | bool | `false` | create an ingress for the client API's ClusterIP service with a trusted certificate for clients that require one, e.g., BrowZer, ZAC |
| clientApi.altIngress.ingressClassName | string | `""` | ingress class name, e.g., "nginx" |
| clientApi.altIngress.labels | object | `{}` | ingress labels |
| clientApi.altIngress.tls | object | `{}` | deprecated: TLS passthrough is required; configure an alternative certificate to project into the container in webBindingPki.altServerCerts |
| clientApi.containerPort | int | `1280` | cluster service target port on the container |
| clientApi.dnsNames | list | `[]` | additional DNS SANs |
| clientApi.ingress.annotations | object | `{}` | ingress annotations, e.g., to configure ingress-nginx |
| clientApi.ingress.enabled | bool | `false` | create a TLS-passthrough ingress for the client API's ClusterIP service |
| clientApi.ingress.ingressClassName | string | `""` | ingress class name, e.g., "nginx" |
| clientApi.ingress.labels | object | `{}` | ingress labels |
| clientApi.ingress.tls | object | `{}` | deprecated: TLS passthrough is required |
| clientApi.service.enabled | bool | `true` | create a cluster service for the deployment |
| clientApi.service.type | string | `"LoadBalancer"` | expose the service as a ClusterIP, NodePort, or LoadBalancer |
| ctrlPlane.advertisedHost | string | `"{{ .Values.clientApi.advertisedHost }}"` | global DNS name by which routers can resolve a reachable IP for this service; the default is the cluster service DNS name, which assumes all routers are inside the same cluster |
| ctrlPlane.advertisedPort | string | `"{{ .Values.clientApi.advertisedPort }}"` | cluster service, node port, load balancer, and ingress port |
| ctrlPlane.alternativeIssuer | object | `{}` | kind and name of an alternative issuer for the controller's identity |
| ctrlPlane.containerPort | string | `"{{ .Values.clientApi.containerPort }}"` | cluster service target port on the container |
| ctrlPlane.dnsNames | list | `[]` | additional DNS SANs for the ctrl plane identity |
| ctrlPlane.ingress.annotations | object | `{}` | ingress annotations, e.g., to configure ingress-nginx |
| ctrlPlane.ingress.enabled | bool | `false` | create an ingress for the cluster service |
| ctrlPlane.ingress.ingressClassName | string | `""` | ingress class name, e.g., "nginx" |
| ctrlPlane.ingress.labels | object | `{}` | ingress labels |
| ctrlPlane.ingress.tls | object | `{}` | deprecated: TLS passthrough is required |
| ctrlPlane.service.enabled | bool | `true` | create a separate cluster service for the ctrl plane; enabling this requires you to also set the host and port for a separate ctrl plane TLS listener |
| ctrlPlane.service.type | string | `"ClusterIP"` | expose the service as a ClusterIP, NodePort, or LoadBalancer |
| ctrlPlaneCasBundle.namespaceSelector | object | `{}` | namespaces where trust-manager will create the Bundle resource containing Ziti's trusted CA certs (default: empty means all namespaces) |
| customAdminSecretName | string | `""` | set the admin user and password from a custom secret; the custom secret must be an Opaque secret with data keys admin-user and admin-password |
| dbFile | string | `"ctrl.db"` | name of the BoltDB file |
| edgeSignerPki.admin_client_cert.duration | string | `"8760h"` | admin client certificate duration as Go time.Duration |
| edgeSignerPki.admin_client_cert.renewBefore | string | `"720h"` | renew the admin client certificate before expiry as Go time.Duration |
| edgeSignerPki.enabled | bool | `true` | generate a separate PKI root of trust for the edge signer CA |
| env | object | `{}` | set name-to-value pairs in the containers' environment |
| envSecrets | object | `{}` | set secrets as environment variables in the container |
| fabric.events.enabled | bool | `false` | enable the fabric event logger and file handler |
| fabric.events.fileName | string | `"fabric-events.json"` | |
| fabric.events.mountDir | string | `"/var/run/ziti"` | |
| fabric.events.network.intervalAgeThreshold | string | `"5s"` | matching interval age and reporting interval ensures coherent metrics from fabric events |
| fabric.events.network.metricsReportInterval | string | `"5s"` | matching interval age and reporting interval ensures coherent metrics from fabric events |
| fabric.events.subscriptions[0].type | string | `"fabric.circuits"` | |
| fabric.events.subscriptions[1].type | string | `"fabric.links"` | |
| fabric.events.subscriptions[2].type | string | `"fabric.routers"` | |
| fabric.events.subscriptions[3].type | string | `"fabric.terminators"` | |
| fabric.events.subscriptions[4].metricFilter | string | `".*"` | |
| fabric.events.subscriptions[4].sourceFilter | string | `".*"` | |
| fabric.events.subscriptions[4].type | string | `"metrics"` | |
| fabric.events.subscriptions[5].type | string | `"edge.sessions"` | |
| fabric.events.subscriptions[6].type | string | `"edge.apiSessions"` | |
| fabric.events.subscriptions[7].type | string | `"fabric.usage"` | |
| fabric.events.subscriptions[7].version | int | `3` | |
| fabric.events.subscriptions[8].type | string | `"services"` | |
| fabric.events.subscriptions[9].interval | string | `"5s"` | |
| fabric.events.subscriptions[9].type | string | `"edge.entityCounts"` | |
| highAvailability.mode | string | `"standalone"` | Ziti controller HA mode |
| highAvailability.replicas | int | `1` | Ziti controller HA swarm replicas |
| image.additionalArgs | list | `[]` | additional arguments can be passed directly to the container to modify ziti runtime arguments |
| image.args | list | `["{{ include \"configMountDir\" . }}/ziti-controller.yaml"]` | args for the entrypoint command |
| image.command | list | `["ziti","controller","run"]` | container entrypoint command |
| image.homeDir | string | `"/home/ziggy"` | homeDir for the admin login shell; must align with the container image's ~/.bashrc for ziti CLI auto-complete to work |
| image.pullPolicy | string | `"IfNotPresent"` | deployment image pull policy |
| image.repository | string | `"docker.io/openziti/ziti-controller"` | container image repository for the app deployment |
| image.tag | string | `""` | override the container image tag specified in the chart |
| ingress-nginx.controller.extraArgs.enable-ssl-passthrough | string | `"true"` | configure the ingress-nginx subchart to enable the pass-through TLS feature |
| ingress-nginx.enabled | bool | `false` | install the ingress-nginx subchart |
| managementApi | object | `{"advertisedHost":"{{ .Values.clientApi.advertisedHost }}","advertisedPort":"{{ .Values.clientApi.advertisedPort }}","containerPort":"{{ .Values.clientApi.containerPort }}","dnsNames":[],"ingress":{"annotations":{},"enabled":false,"ingressClassName":"","labels":{},"tls":{}},"service":{"enabled":false,"type":"ClusterIP"}}` | by default, there's no need for a separate cluster service, ingress, or load balancer for the management API because it shares a TLS listener with the client API, is reachable at the same address, and presents the same web identity cert; you may configure a separate service, ingress, load balancer, etc. for the management API by setting managementApi.service.enabled=true |
| managementApi.advertisedHost | string | `"{{ .Values.clientApi.advertisedHost }}"` | global DNS name by which routers can resolve a reachable IP for this service |
| managementApi.advertisedPort | string | `"{{ .Values.clientApi.advertisedPort }}"` | cluster service, node port, load balancer, and ingress port |
| managementApi.containerPort | string | `"{{ .Values.clientApi.containerPort }}"` | cluster service target port on the container |
| managementApi.dnsNames | list | `[]` | additional DNS SANs |
| managementApi.ingress.annotations | object | `{}` | ingress annotations, e.g., to configure ingress-nginx |
| managementApi.ingress.enabled | bool | `false` | create an ingress for the cluster service |
| managementApi.ingress.ingressClassName | string | `""` | ingress class name, e.g., "nginx" |
| managementApi.ingress.labels | object | `{}` | ingress labels |
| managementApi.ingress.tls | object | `{}` | deprecated: TLS passthrough is required |
| managementApi.service.enabled | bool | `false` | create a cluster service for the deployment |
| managementApi.service.type | string | `"ClusterIP"` | expose the service as a ClusterIP, NodePort, or LoadBalancer |
| network.createCircuitRetries | int | `2` | the number of retries that will be attempted to create a path (and terminate it) for new circuits |
| network.cycleSeconds | int | `15` | the period at which the controller re-evaluates the performance of all circuits running on the network |
| network.initialLinkLatency | string | `"65s"` | the latency assigned to a link when it's first created; overwritten as soon as latency is actually reported from the routers |
| network.minRouterCost | int | `10` | minimum router cost |
| network.pendingLinkTimeoutSeconds | int | `10` | how long to wait before creating a new link between routers where there isn't an established link but a link request has been sent |
| network.routeTimeoutSeconds | int | `10` | the number of seconds the controller will wait for a route attempt to succeed |
| network.routerConnectChurnLimit | string | `"1m"` | how often a new control channel connection can take over for a router with an existing control channel connection |
| network.smart.rerouteCap | int | `4` | hard upper limit of underperforming circuits that are candidates to be re-routed; if smart routing detects 100 underperforming circuits, rerouteCap is 1, and rerouteFraction is 0.02, then at most 1 circuit will be re-routed in this cycleSeconds period |
| network.smart.rerouteFraction | float | `0.02` | fractional upper limit of underperforming circuits that are candidates to be re-routed; if smart routing detects 100 underperforming circuits and rerouteFraction is 0.02, then at most 2 circuits (2% of 100) will be re-routed in this cycleSeconds period |
| nodeSelector | object | `{}` | deployment template spec node selector |
| persistence.VolumeName | string | `""` | PVC volume name |
| persistence.accessMode | string | `"ReadWriteOnce"` | PVC access mode: ReadWriteOnce (concurrent mounts not allowed) or ReadWriteMany (concurrent allowed) |
| persistence.annotations | object | `{}` | annotations for the PVC |
| persistence.enabled | bool | `true` | required: place a storage claim for the BoltDB persistent volume |
| persistence.existingClaim | string | `""` | a manually managed Persistent Volume and Claim; requires persistence.enabled=true; if defined, the PVC must be created manually before the volume will be bound |
| persistence.size | string | `"2Gi"` | 2GiB is enough for tens of thousands of entities, but feel free to make it larger |
| persistence.storageClass | string | `""` | storage class of the PV to bind; by default, the default storage class is used; if the PV uses a different storage class, specify it here |
| podAnnotations | object | `{}` | annotations to apply to all pods deployed by this chart |
| podSecurityContext | object | `{"fsGroup":2171}` | deployment template spec security context |
| podSecurityContext.fsGroup | int | `2171` | the GID of the group that should own any files created by the container, especially the BoltDB file |
| prometheus.advertisedHost | string | `""` | DNS name to advertise in place of the default internal cluster name built from the Helm release name |
| prometheus.advertisedPort | int | `443` | cluster service, node port, load balancer, and ingress port |
| prometheus.containerPort | int | `9090` | cluster service target port on the container |
| prometheus.service.annotations | object | `{}` | |
| prometheus.service.enabled | bool | `false` | create a cluster service for the deployment |
| prometheus.service.labels | object | `{"app":"prometheus"}` | extra labels for matching only this service, e.g., by a ServiceMonitor |
| prometheus.service.type | string | `"ClusterIP"` | expose the service as a ClusterIP, NodePort, or LoadBalancer |
| prometheus.serviceMonitor.annotations | object | `{}` | ServiceMonitor annotations |
| prometheus.serviceMonitor.enabled | bool | `true` | if enabled, and if the prometheus service is enabled, ServiceMonitor resources for Prometheus Operator are created |
| prometheus.serviceMonitor.interval | string | `nil` | ServiceMonitor scrape interval |
| prometheus.serviceMonitor.labels | object | `{}` | additional ServiceMonitor labels |
| prometheus.serviceMonitor.metricRelabelings | list | `[]` | ServiceMonitor relabel configs to apply to samples as the last step before ingestion (defines metric_relabel_configs); see https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig |
| prometheus.serviceMonitor.namespace | string | `nil` | alternative namespace for ServiceMonitor resources |
| prometheus.serviceMonitor.namespaceSelector | object | `{}` | namespace selector for ServiceMonitor resources |
| prometheus.serviceMonitor.relabelings | list | `[]` | ServiceMonitor relabel configs to apply to samples before scraping (defines relabel_configs); see https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig |
| prometheus.serviceMonitor.scheme | string | `"https"` | ServiceMonitor will use http by default, but you can pick https as well |
| prometheus.serviceMonitor.scrapeTimeout | string | `nil` | ServiceMonitor scrape timeout in Go duration format (e.g., 15s) |
| prometheus.serviceMonitor.targetLabels | list | `[]` | ServiceMonitor will add labels from the service to the Prometheus metric; see https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec |
| prometheus.serviceMonitor.tlsConfig | object | `{"insecureSkipVerify":true}` | ServiceMonitor will use these tlsConfig settings to make the health check requests |
| prometheus.serviceMonitor.tlsConfig.insecureSkipVerify | bool | `true` | skip TLS verification, because the SAN will not match the pod IP |
| resources | object | `{}` | deployment container resources |
| securityContext | object | `{}` | deployment container security context |
| spireAgent.enabled | bool | `false` | if you are running a container with the spire-agent binary installed, this allows you to add the hostPath necessary for connecting to the spire socket |
| spireAgent.spireSocketMnt | string | `"/run/spire/sockets"` | file path of the spire socket mount |
| tolerations | list | `[]` | deployment template spec tolerations |
| trust-manager.app.trust.namespace | string | `"{{ .Release.Namespace }}"` | trust-manager must be configured to trust the namespace in which the controller is deployed so that it will create the Bundle resource for the ctrl plane trust bundle |
| trust-manager.crds.enabled | bool | `false` | CRDs must be applied in advance of installing the parent chart |
| trust-manager.enabled | bool | `false` | install the trust-manager subchart |
| trustDomain | string | `""` | permanent SPIFFE ID to use for this controller's trust domain (default: random, fixed for the life of the chart release) |
| useCustomAdminSecret | bool | `false` | allow using a custom admin secret, which must be created beforehand; if enabled, the admin secret will not be generated by this Helm chart |
| webBindingPki.altServerCerts | list | `[]` | |
| webBindingPki.enabled | bool | `true` | generate a separate PKI root of trust for web bindings, i.e., client, management, and prometheus APIs |

Alternative Web Server Certificates

The purpose of the alt_server_certs feature is to bind a publicly trusted server certificate to the controller's web listener. This is useful for publishing the controller's client API with a different DNS name for BrowZer and console clients that must verify the controller's identity with their OS trusted root store.

Request an alternative server certificate from a cert-manager issuer

The most automatic way to bind an alt cert is the certManager mode provided by this chart. This example implies you have separately created a cert-manager ClusterIssuer named "cloudflare-dns01-issuer" that is able to obtain a certificate for the specified DNS name. If publishing the client API's alternative DNS name as a separate Ingress, you may reference that advertised host when requesting the alternative server certificate as shown here with an inline template to ensure they match.

clientApi:
  advertisedHost: edge.ziti.example.com
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  service:
    enabled: true
    type: ClusterIP
  altIngress:
    enabled: true
    ingressClassName: nginx
    # this must be different from clientApi.advertisedHost and must match one of the dnsNames in the altServerCert
    advertisedHost: alt-edge.ziti.example.com
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"

webBindingPki:
  enabled: true
  altServerCerts:
    - mode: certManager
      secretName: my-alt-server-cert
      dnsNames:
        - "{{ .Values.clientApi.altIngress.advertisedHost }}"
      issuerRef:
        group: cert-manager.io
        kind: ClusterIssuer
        name: cloudflare-dns01-issuer
      mountPath: /etc/ziti/alt-server-cert
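
After upgrading the release and waiting for cert-manager to issue the certificate, you can check which server certificate the alternative address presents (hostname from the example above):

openssl s_client -connect alt-edge.ziti.example.com:443 -servername alt-edge.ziti.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer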

Use an alternative certificate and key from a tls secret

The alternative server certificate and key may also be provided from a Kubernetes TLS secret. Declare the tls secret in the additionalVolumes section and reference it in the altServerCerts section.

additionalVolumes:
  - name: my-alt-server-cert
    volumeType: secret
    mountPath: /etc/ziti/my-alt-server-cert
    secretName: my-alt-server-cert

webBindingPki:
  altServerCerts:
    - mode: secret
      secretName: my-alt-server-cert
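
The referenced secret is a standard kubernetes.io/tls secret, which you can create from PEM files; the file paths here are illustrative:

kubectl -n ziti-controller create secret tls my-alt-server-cert \
  --cert=/path/to/server.crt \
  --key=/path/to/server.key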