Koperator
The Koperator (formerly called Banzai Cloud Kafka Operator) is a Kubernetes operator to automate provisioning, management, autoscaling and operations of Apache Kafka clusters deployed to K8s.
Overview
Apache Kafka is an open-source distributed streaming platform. The main features of the Koperator include:
- the provisioning of secure and production-ready Kafka clusters
- fine-grained broker configuration support
- advanced and highly configurable External Access via LoadBalancers using Envoy
- graceful Kafka cluster scaling and rebalancing
- monitoring via Prometheus
- encrypted communication using SSL
- automatic reaction and self-healing based on alerts (plugin system, with meaningful default alert plugins) using Cruise Control
- graceful rolling upgrade
- advanced topic and user management via CRD
The Koperator helps you create production-ready Apache Kafka clusters on Kubernetes, with scaling, rebalancing, and alert-based self-healing.
Motivation
Apache Kafka predates Kubernetes and was designed mostly for static on-premise environments. State management, node identity, failover, and so on all come part and parcel with Kafka, so making it work properly on Kubernetes and on an underlying dynamic environment can be a challenge.
There are already several approaches to operating Apache Kafka on Kubernetes; however, we did not find them appropriate for use in a highly dynamic environment, nor capable of meeting our customers’ needs. At the same time, there is substantial interest within the Kafka community for a solution which enables Kafka on Kubernetes, both in the open source and closed source space.
We took a different approach from what’s out there - we believe for good reason - so please read on to understand more about our design motivations and some of the scenarios which drove us to create the Koperator.
Finally, our motivation is to build an open source solution and a community which drives the innovation and features of this operator. We are long term contributors and active community members of both Apache Kafka and Kubernetes, and we hope to recreate a similar community around this operator.
Koperator features
Design motivations
Kafka is a stateful application. The first piece of the puzzle is the Broker, which is a simple server capable of creating/forming a cluster with other Brokers. Every Broker has its own unique configuration, which differs slightly from all others - the most relevant difference being the unique broker ID.
Most Kafka on Kubernetes operators use StatefulSet to create a Kafka cluster. To quickly recap from the K8s docs:
StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about their ordering and uniqueness. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that is maintained across any rescheduling.
How does this look from the perspective of Apache Kafka?
With StatefulSet we get:
- unique Broker IDs generated during Pod startup
- networking between brokers with headless services
- unique Persistent Volumes for Brokers
Using StatefulSet we lose:
- the ability to modify the configuration of individual Brokers
- the ability to remove a specific Broker from a cluster (StatefulSet always removes the most recently created Broker)
- the ability to use multiple, different Persistent Volumes for each Broker
Koperator uses simple Pods, ConfigMaps, and PersistentVolumeClaims instead of StatefulSet. Using these resources allows us to build an Operator which is better suited to manage Apache Kafka.
With the Koperator you can:
- modify the configuration of unique Brokers
- remove specific Brokers from clusters
- use multiple Persistent Volumes for each Broker
Features
Fine Grained Broker Configuration Support
We needed to be able to react to events in a fine-grained way for each Broker - and not in the limited way StatefulSet does (which, for example, removes the most recently created Brokers). Some of the available solutions try to overcome these deficits by placing scripts inside the container to generate configurations at runtime, whereas the Koperator’s configurations are deterministically placed in specific ConfigMaps.
Graceful Kafka Cluster Scaling with the help of our CruiseControlOperation custom resource
We know how to operate Apache Kafka at scale (we are contributors and have been operating Kafka on Kubernetes for years now). We believe, however, that LinkedIn has even more experience than we do. To scale Kafka clusters both up and down gracefully, we integrated LinkedIn’s Cruise-Control to do the hard work for us. We already have good defaults (i.e. plugins) that react to events, but we also allow our users to write their own.
External Access via LoadBalancer
The Koperator externalizes access to Apache Kafka using a dynamically (re)configured Envoy proxy. Using Envoy allows us to use a single LoadBalancer, so there’s no need for a LoadBalancer for each Broker.
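For illustration, here is a minimal sketch of an external listener in the KafkaCluster custom resource. The field names come from the KafkaCluster CRD; the port values are examples only:
listenersConfig:
  externalListeners:
    - type: "plaintext"
      name: "external"
      externalStartingPort: 19090   # Envoy forwards port 19090+<broker id> to each broker
      containerPort: 9094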
Communication via SSL
The operator fully automates Kafka’s SSL support.
The operator can provision the required secrets and certificates for you, or you can provide your own.
Monitoring via Prometheus
The Koperator exposes Cruise-Control and Kafka JMX metrics to Prometheus.
Reacting on Alerts
Koperator acts as a Prometheus Alert Manager. It receives alerts defined in Prometheus, and creates actions based on Prometheus alert annotations.
Currently, there are three default actions (which can be extended):
- upscale cluster (add a new Broker)
- downscale cluster (remove a Broker)
- add an additional disk to a Broker
Graceful Rolling Upgrade
The operator supports graceful rolling upgrades, meaning that it checks whether the cluster is healthy before proceeding.
It verifies that the cluster has no offline partitions and that all replicas are in sync.
It proceeds only while the number of failures stays below the configured failure threshold.
The operator also allows you to create special alerts in Prometheus that affect the rolling upgrade state by increasing the error rate.
Dynamic Configuration Support
Kafka operates with three types of configuration:
- Read-only
- ClusterWide
- PerBroker
Read-only configs require a broker restart to take effect; all the others can be updated dynamically.
The operator CRD distinguishes these fields and performs the appropriate action: either a rolling upgrade or a dynamic reconfiguration.
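As a sketch of how this surfaces in the KafkaCluster custom resource (readOnlyConfig and clusterWideConfig are fields of the CRD; the property values below are examples only):
spec:
  readOnlyConfig: |        # changing these triggers a rolling upgrade
    auto.create.topics.enable=false
  clusterWideConfig: |     # changing these triggers a dynamic reconfiguration
    background.threads=10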
Seamless Istio mesh support
- The operator allows using ClusterIP services instead of headless ones, which works better with service meshes.
- To avoid Kafka initializing too early, which might lead to an unready sidecar container, the operator uses a small script to delay startup. Any Kafka image can be used, with the only requirement being an available curl command.
- To provide access to a Kafka cluster running inside the mesh, the operator supports creating Istio ingress gateways.
Apache Kafka, Kafka, and the Kafka logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.
1 - Install the operator
The operator installs version 3.1.0 of Apache Kafka, and can run on:
- Minikube v0.33.1+,
- Kubernetes 1.21-1.24, and
- Red Hat OpenShift 4.10-4.11.
The operator supports Kafka 2.6.2-3.1.x.
Prerequisites
- A Kubernetes cluster (minimum 6 vCPU and 10 GB RAM). Red Hat OpenShift is also supported in Koperator version 0.24 and newer, but note that it needs some permissions for certain components to function.
We believe in the separation of concerns principle, thus the Koperator does not install or manage Apache ZooKeeper or cert-manager.
Install Koperator and its requirements independently
Install cert-manager with Helm
Koperator uses cert-manager for issuing certificates to clients and brokers and cert-manager is required for TLS-encrypted client connections. It is recommended to deploy and configure a cert-manager instance if there is none in your environment yet.
Note:
- Koperator 0.24.0 and newer versions support cert-manager 1.10.0+ (which is a requirement for Red Hat OpenShift)
- Koperator 0.18.1 and newer supports cert-manager 1.5.3-1.9.x
- Koperator 0.8.x-0.17.0 supports cert-manager 1.3.x
- Install cert-manager’s CustomResourceDefinitions.
kubectl apply \
--validate=false \
-f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
Expected output:
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
- If you are installing cert-manager on a Red Hat OpenShift version 4.10 cluster, the default security computing profile must be enabled for cert-manager to work.
- Create a new SecurityContextConstraints object named restricted-seccomp, which is a copy of the OpenShift built-in restricted SecurityContextConstraints but also allows the runtime/default / RuntimeDefault security computing profile, according to the OpenShift documentation.
oc create -f - <<EOF
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
type: MustRunAs
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
annotations:
include.release.openshift.io/ibm-cloud-managed: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
kubernetes.io/description: restricted denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. This is the most restrictive SCC and it is used by default for authenticated users.
release.openshift.io/create-only: "true"
name: restricted-seccomp # ~
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
type: MustRunAsRange
seLinuxContext:
type: MustRunAs
seccompProfiles: # +
- runtime/default # +
supplementalGroups:
type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
EOF
Expected output:
securitycontextconstraints.security.openshift.io/restricted-seccomp created
- Elevate the permissions of the namespace containing the cert-manager service account.
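The command itself is missing from this view; judging from the expected output below, it is presumably the following (substitute your namespace for the placeholder):
oc adm policy add-scc-to-group restricted-seccomp system:serviceaccounts:{NAMESPACE_FOR_CERT_MANAGER_SERVICE_ACCOUNT}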
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:restricted-seccomp added: "system:serviceaccounts:{NAMESPACE_FOR_CERT_MANAGER_SERVICE_ACCOUNT}"
- Install cert-manager.
helm install \
cert-manager \
--repo https://charts.jetstack.io cert-manager \
--version v1.11.0 \
--namespace cert-manager \
--create-namespace \
--atomic \
--debug
Expected output:
install.go:194: [debug] Original chart version: "v1.11.0"
install.go:211: [debug] CHART PATH: /Users/pregnor/.cache/helm/repository/cert-manager-v1.11.0.tgz
# ...
NAME: cert-manager
LAST DEPLOYED: Thu Mar 23 08:40:07 2023
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
# ...
NOTES:
cert-manager v1.11.0 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
- Verify that cert-manager has been deployed and is in running state.
kubectl get pods -n cert-manager
Expected output:
NAME READY STATUS RESTARTS AGE
cert-manager-6b4d84674-4pkh4 1/1 Running 0 117s
cert-manager-cainjector-59f8d9f696-wpqph 1/1 Running 0 117s
cert-manager-webhook-56889bfc96-x8szj 1/1 Running 0 117s
Install zookeeper-operator with Helm
Koperator requires Apache Zookeeper for Kafka operations. You must:
- Deploy zookeeper-operator if your environment doesn’t have an instance of it yet.
- Create a Zookeeper cluster if there is none in your environment yet for your Kafka cluster.
Note: We recommend creating a separate ZooKeeper deployment for each Kafka cluster. If you want to share the same ZooKeeper cluster across multiple Kafka cluster instances, use a unique zk path in the KafkaCluster CR (see the sketch below) to avoid conflicts (even with previous, defunct KafkaCluster instances).
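For illustration, a sketch of the relevant KafkaCluster fields (zkAddresses and zkPath are CRD fields; the address assumes the client service that zookeeper-operator creates for a cluster named zookeeper-server in the zookeeper namespace):
spec:
  zkAddresses:
    - "zookeeper-server-client.zookeeper:2181"
  zkPath: "/kafka-cluster-1"   # unique path per Kafka cluster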
- If you are installing zookeeper-operator on a Red Hat OpenShift cluster, elevate the permissions of the namespace containing the Zookeeper service account.
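The command is not included in this view; based on the expected output below, it is presumably:
oc adm policy add-scc-to-group anyuid system:serviceaccounts:{NAMESPACE_FOR_ZOOKEEPER_SERVICE_ACCOUNT}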
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccounts:{NAMESPACE_FOR_ZOOKEEPER_SERVICE_ACCOUNT}"
- Install ZooKeeper using Pravega’s Zookeeper Operator.
helm install \
zookeeper-operator \
--repo https://charts.pravega.io zookeeper-operator \
--version 0.2.14 \
--namespace=zookeeper \
--create-namespace \
--atomic \
--debug
Expected output:
install.go:194: [debug] Original chart version: "0.2.14"
install.go:211: [debug] CHART PATH: /Users/pregnor/.cache/helm/repository/zookeeper-operator-0.2.14.tgz
# ...
NAME: zookeeper-operator
LAST DEPLOYED: Thu Mar 23 08:42:42 2023
NAMESPACE: zookeeper
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
# ...
- Verify that zookeeper-operator has been deployed and is in running state.
kubectl get pods --namespace zookeeper
Expected output:
NAME READY STATUS RESTARTS AGE
zookeeper-operator-5857967dcc-gm5l5 1/1 Running 0 3m22s
Deploy a Zookeeper cluster for Kafka
- Create a Zookeeper cluster.
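The manifest is not included in this view; a minimal sketch, assuming the zookeeper.pravega.io/v1beta1 API of zookeeper-operator and a cluster named zookeeper-server to match the pod names shown below:
kubectl create -n zookeeper -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper-server
  namespace: zookeeper
spec:
  replicas: 1
EOF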
- Verify that Zookeeper has been deployed and is in running state with the configured number of replicas.
kubectl get pods -n zookeeper
Expected output:
NAME READY STATUS RESTARTS AGE
zookeeper-server-0 1/1 Running 0 27m
zookeeper-operator-54444dbd9d-2tccj 1/1 Running 0 28m
Install prometheus-operator with Helm
Koperator uses Prometheus for exporting metrics of the Kafka cluster. It is recommended to deploy a Prometheus instance if you don’t already have one.
- If you are installing prometheus-operator on a Red Hat OpenShift version 4.10 cluster, create a SecurityContextConstraints object named nonroot-v2 with the following configuration for the Prometheus admission and operator service accounts to work.
oc create -f - <<EOF
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
groups: []
kind: SecurityContextConstraints
metadata:
annotations:
include.release.openshift.io/ibm-cloud-managed: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
kubernetes.io/description: nonroot provides all features of the restricted SCC but allows users to run with any non-root UID. The user must specify the UID or it must be specified by the manifest of the container runtime. On top of the legacy 'nonroot' SCC, it also requires to drop ALL capabilities and does not allow privilege escalation binaries. It will also default the seccomp profile to runtime/default if unset, otherwise this seccomp profile is required.
name: nonroot-v2
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
type: MustRunAsNonRoot
seLinuxContext:
type: MustRunAs
seccompProfiles:
- runtime/default
supplementalGroups:
type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
EOF
Expected output:
securitycontextconstraints.security.openshift.io/nonroot-v2 created
- If you are installing prometheus-operator on a Red Hat OpenShift cluster, elevate the permissions of the Prometheus service accounts.
Note: OpenShift doesn’t let you install Prometheus in the default namespace due to security considerations.
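The commands are missing from this view; based on the expected output below, they are presumably the following (the service account names are placeholders for your deployment):
oc adm policy add-scc-to-user nonroot-v2 -n prometheus -z {PROMETHEUS_ADMISSION_SERVICE_ACCOUNT_NAME}
oc adm policy add-scc-to-user nonroot-v2 -n prometheus -z {PROMETHEUS_OPERATOR_SERVICE_ACCOUNT_NAME}
oc adm policy add-scc-to-user hostnetwork -n prometheus -z {PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}
oc adm policy add-scc-to-user node-exporter -n prometheus -z {PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}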
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 added: "{PROMETHEUS_ADMISSION_SERVICE_ACCOUNT_NAME}"
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 added: "{PROMETHEUS_OPERATOR_SERVICE_ACCOUNT_NAME}"
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork added: "{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}"
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:node-exporter added: "{PROMETHEUS_NODE_EXPORTER_SERVICE_ACCOUNT_NAME}"
- Install the Prometheus operator and its CustomResourceDefinitions into the prometheus namespace.
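The helm command is not included in this view; following the pattern of the other installs and the chart version shown in the output below, it is presumably:
helm install \
prometheus \
--repo https://prometheus-community.github.io/helm-charts kube-prometheus-stack \
--version 45.7.1 \
--namespace prometheus \
--create-namespace \
--atomic \
--debug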
Expected output:
install.go:194: [debug] Original chart version: "45.7.1"
install.go:211: [debug] CHART PATH: /Users/pregnor/.cache/helm/repository/kube-prometheus-stack-45.7.1.tgz
# ...
NAME: prometheus
LAST DEPLOYED: Thu Mar 23 09:28:29 2023
NAMESPACE: prometheus
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
# ...
COMPUTED VALUES:
# ...
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace prometheus get pods -l "release=prometheus"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
- Verify that prometheus-operator has been deployed and is in running state.
kubectl get pods -n prometheus
Expected output:
NAME READY STATUS RESTARTS AGE
prometheus-kube-prometheus-operator-646d5fd7d5-s72jn 1/1 Running 0 15m
Install Koperator with Helm
Koperator can be deployed using its Helm chart.
- Install the Koperator CustomResourceDefinition resources (adjust the version number to the Koperator release you want to install). This is performed in a separate step to allow you to uninstall and reinstall Koperator without deleting your installed custom resources.
kubectl create \
--validate=false \
-f https://github.com/banzaicloud/koperator/releases/download/v0.25.1/kafka-operator.crds.yaml
Expected output:
customresourcedefinition.apiextensions.k8s.io/cruisecontroloperations.kafka.banzaicloud.io created
customresourcedefinition.apiextensions.k8s.io/kafkaclusters.kafka.banzaicloud.io created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.banzaicloud.io created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.banzaicloud.io created
- If you are installing Koperator on a Red Hat OpenShift cluster:
- Elevate the permissions of the Koperator namespace.
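The command is missing from this view; based on the expected output below, it is presumably:
oc adm policy add-scc-to-group anyuid system:serviceaccounts:{NAMESPACE_FOR_KOPERATOR}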
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccounts:{NAMESPACE_FOR_KOPERATOR}"
- If the Kafka cluster is going to run in a different namespace than Koperator, elevate the permissions of the Kafka cluster broker service account (the serviceAccountName provided in the KafkaCluster custom resource).
oc adm policy add-scc-to-user anyuid system:serviceaccount:{NAMESPACE_FOR_KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT}:{KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT_NAME}
Expected output:
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "system:serviceaccount:{NAMESPACE_FOR_KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT}:{KAFKA_CLUSTER_BROKER_SERVICE_ACCOUNT_NAME}"
- Install Koperator into the kafka namespace:
helm install \
kafka-operator \
--repo https://kubernetes-charts.banzaicloud.com kafka-operator \
--version 0.25.1 \
--namespace=kafka \
--create-namespace \
--atomic \
--debug
Expected output:
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /Users/pregnor/development/src/github.com/banzaicloud/koperator/kafka-operator-0.25.1.tgz
# ...
NAME: kafka-operator
LAST DEPLOYED: Thu Mar 23 10:05:11 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
# ...
- Verify that Koperator has been deployed and is in running state.
kubectl get pods -n kafka
Expected output:
NAME READY STATUS RESTARTS AGE
kafka-operator-operator-8458b45587-286f9 2/2 Running 0 62s
Deploy a Kafka cluster
- Create the Kafka cluster using the KafkaCluster custom resource. You can find various examples for the custom resource in Configure Kafka cluster and in the Koperator repository.
CAUTION:
After the cluster is created, you cannot change the way the listeners are configured without an outage. If a cluster is created with unencrypted (plain text) listeners and you want to switch to SSL-encrypted listeners (or the other way around), you must manually delete each broker pod. The operator restarts the pods with the new listener configuration.
- To create a sample Kafka cluster that allows unencrypted client connections, run the following command:
kubectl create \
-n kafka \
-f https://raw.githubusercontent.com/banzaicloud/koperator/v0.25.1/config/samples/simplekafkacluster.yaml
- To create a sample Kafka cluster that allows TLS-encrypted client connections, run the following command. For details on the configuration parameters related to SSL, see Enable SSL encryption in Apache Kafka.
kubectl create \
-n kafka \
-f https://raw.githubusercontent.com/banzaicloud/koperator/v0.25.1/config/samples/simplekafkacluster_ssl.yaml
Expected output:
kafkacluster.kafka.banzaicloud.io/kafka created
- Wait and verify that the Kafka cluster resources have been deployed and are in running state.
kubectl -n kafka get kafkaclusters.kafka.banzaicloud.io kafka --watch
Expected output:
NAME CLUSTER STATE CLUSTER ALERT COUNT LAST SUCCESSFUL UPGRADE UPGRADE ERROR COUNT AGE
kafka ClusterReconciling 0 0 5s
kafka ClusterReconciling 0 0 7s
kafka ClusterReconciling 0 0 8s
kafka ClusterReconciling 0 0 9s
kafka ClusterReconciling 0 0 2m17s
kafka ClusterReconciling 0 0 3m11s
kafka ClusterReconciling 0 0 3m27s
kafka ClusterReconciling 0 0 3m29s
kafka ClusterReconciling 0 0 3m31s
kafka ClusterReconciling 0 0 3m32s
kafka ClusterReconciling 0 0 3m32s
kafka ClusterRunning 0 0 3m32s
kafka ClusterReconciling 0 0 3m32s
kafka ClusterRunning 0 0 3m34s
kafka ClusterReconciling 0 0 4m23s
kafka ClusterRunning 0 0 4m25s
kafka ClusterReconciling 0 0 4m25s
kafka ClusterRunning 0 0 4m27s
kafka ClusterRunning 0 0 4m37s
kafka ClusterReconciling 0 0 4m37s
kafka ClusterRunning 0 0 4m39s
kubectl get pods -n kafka
Expected output:
kafka-0-9brj4 1/1 Running 0 94s
kafka-1-c2spf 1/1 Running 0 93s
kafka-2-p6sg2 1/1 Running 0 92s
kafka-cruisecontrol-776f49fdbb-rjhp8 1/1 Running 0 51s
kafka-operator-operator-7d47f65d86-2mx6b 2/2 Running 0 13m
- If prometheus-operator is deployed, create a Prometheus instance and corresponding ServiceMonitors for Koperator.
kubectl create \
-n kafka \
-f https://raw.githubusercontent.com/banzaicloud/koperator/v0.25.1/config/samples/kafkacluster-prometheus.yaml
Expected output:
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
prometheus.monitoring.coreos.com/kafka-prometheus created
prometheusrule.monitoring.coreos.com/kafka-alerts created
serviceaccount/prometheus created
servicemonitor.monitoring.coreos.com/cruisecontrol-servicemonitor created
servicemonitor.monitoring.coreos.com/kafka-servicemonitor created
- Wait and verify that the Kafka cluster Prometheus instance has been deployed and is in running state.
kubectl get pods -n kafka
Expected output:
NAME READY STATUS RESTARTS AGE
kafka-0-nvx8c 1/1 Running 0 16m
kafka-1-swps9 1/1 Running 0 15m
kafka-2-lppzr 1/1 Running 0 15m
kafka-cruisecontrol-fb659b84b-7cwpn 1/1 Running 0 15m
kafka-operator-operator-8bb75c7fb-7w4lh 2/2 Running 0 17m
prometheus-kafka-prometheus-0 2/2 Running 0 16m
To test your deployment, see Test provisioned Kafka Cluster.
2 - Upgrade the operator
When upgrading your Koperator deployment to a new version, complete the following steps.
- Download the CRDs for the new release from the Koperator releases page. They are included in the assets of the release.
CAUTION:
Hazard of data loss: Do not delete the old CRD from the cluster. Deleting the CRD removes your Kafka cluster.
- Replace the KafkaCluster CRD with the new one on your cluster by running the following command (replace <versionnumber> with the release you are upgrading to, for example, v0.14.0).
kubectl replace --validate=false -f https://github.com/banzaicloud/koperator/releases/download/<versionnumber>/kafka-operator.crds.yaml
- Update your Koperator deployment by running:
helm repo update
helm upgrade kafka-operator --namespace=kafka banzaicloud-stable/kafka-operator
3 - Test provisioned Kafka Cluster
Create Topic
Topic creation is enabled by default in Apache Kafka; if it is configured otherwise, you’ll need to create a topic before using it.
- You can use the KafkaTopic CR to create a topic called my-topic like this:
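The manifest itself is missing from this view; a minimal sketch, following the samples in the Koperator repository:
kubectl create -n kafka -f - <<EOF
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
spec:
  clusterRef:
    name: kafka
  name: my-topic
  partitions: 1
  replicationFactor: 1
EOF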
Note: The previous command will fail if the cluster has not finished provisioning.
Expected output:
kafkatopic.kafka.banzaicloud.io/my-topic created
- To create a sample topic from the CLI, you can run the following commands.
For internal listeners exposed by a headless service (KafkaCluster.spec.headlessServiceEnabled set to true):
kubectl -n kafka run kafka-topics -it --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-headless.kafka:29092 --topic my-topic --create --partitions 1 --replication-factor 1
For internal listeners exposed by a regular service (KafkaCluster.spec.headlessServiceEnabled set to false):
kubectl -n kafka run kafka-topics -it --image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-all-broker.kafka:29092 --topic my-topic --create --partitions 1 --replication-factor 1
After you have created a topic, produce and consume some messages:
Send and receive messages without SSL within a cluster
You can use the following commands to send and receive messages within a Kubernetes cluster when SSL encryption is disabled for Kafka.
- Produce messages:
- Start the producer container:
kubectl run \
-n kafka \
kafka-producer \
-it \
--image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 \
--rm=true \
--restart=Never \
-- \
/opt/kafka/bin/kafka-console-producer.sh \
--bootstrap-server kafka-headless:29092 \
--topic my-topic
- Wait for the producer container to run; this may take a couple of seconds.
Expected output:
If you don't see a command prompt, try pressing enter.
- Press enter to get a command prompt. Expected output:
>
- Type your messages and press enter; each line is sent through Kafka.
- Stop the container by pressing CTRL-D.
Expected output:
pod "kafka-producer" deleted
- Consume messages:
- Start the consumer container.
kubectl run \
-n kafka \
kafka-consumer \
-it \
--image=ghcr.io/banzaicloud/kafka:2.13-3.1.0 \
--rm=true \
--restart=Never \
-- \
/opt/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server kafka-headless:29092 \
--topic my-topic \
--from-beginning
- Wait for the consumer container to run; this may take a couple of seconds.
Expected output:
If you don't see a command prompt, try pressing enter.
- The messages sent by the producer are displayed here.
- Stop the container by pressing CTRL-C.
Expected output:
Processed a total of 3 messages
pod "kafka-consumer" deleted
pod kafka/kafka-consumer terminated (Error)
Send and receive messages with SSL within a cluster
You can use the following procedure to send and receive messages within a Kubernetes cluster when SSL encryption is enabled for Kafka. To test a Kafka instance secured by SSL we recommend using kcat.
To use the Java client instead of kcat, generate the proper truststore and keystore using the official docs.
- Create a Kafka user. The client will use this user account to access Kafka. You can use the KafkaUser custom resource to customize the access rights as needed. For example:
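The example manifest is missing from this view; a minimal sketch based on the Koperator samples (the names are illustrative; the secretName is referenced again in the next step):
kubectl create -n kafka -f - <<EOF
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaUser
metadata:
  name: example-kafkauser
  namespace: kafka
spec:
  clusterRef:
    name: kafka
  secretName: example-kafkauser-secret
EOF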
- To use Kafka inside the cluster, create a Pod which contains kcat. Create a kafka-test pod in the kafka namespace. Note that the value of the secretName parameter must be the same as the one you used when creating the KafkaUser resource, for example, example-kafkauser-secret.
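The pod manifest is not included in this view; a minimal sketch (the kcat image choice is an assumption; the mount path matches the kcat commands below):
kubectl create -n kafka -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kafka-test
  namespace: kafka
spec:
  containers:
  - name: kafka-test
    image: edenhill/kcat:1.7.1
    # Keep the container alive so we can exec into it
    command: ["sh", "-c", "exec tail -f /dev/null"]
    volumeMounts:
    - name: sslcerts
      mountPath: /ssl/certs
  volumes:
  - name: sslcerts
    secret:
      secretName: example-kafkauser-secret
EOF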
- Wait until the pod is created, then exec into the container:
kubectl exec -it -n kafka kafka-test -- sh
- Run the following command to check that you can connect to the brokers.
kcat -L -b kafka-headless:29092 -X security.protocol=SSL -X ssl.key.location=/ssl/certs/tls.key -X ssl.certificate.location=/ssl/certs/tls.crt -X ssl.ca.location=/ssl/certs/ca.crt
The first line of the output should indicate that the communication is encrypted, for example:
Metadata for all topics (from broker -1: ssl://kafka-headless:29092/bootstrap):
- Produce some test messages. Run:
kcat -P -b kafka-headless:29092 -t my-topic \
-X security.protocol=SSL \
-X ssl.key.location=/ssl/certs/tls.key \
-X ssl.certificate.location=/ssl/certs/tls.crt \
-X ssl.ca.location=/ssl/certs/ca.crt
And type some test messages.
- Consume some messages.
The following command uses the certificate provisioned with the cluster to connect to Kafka. If you’d like to create and use a different user, create a KafkaUser CR; for details, see the SSL documentation.
kcat -C -b kafka-headless:29092 -t my-topic \
-X security.protocol=SSL \
-X ssl.key.location=/ssl/certs/tls.key \
-X ssl.certificate.location=/ssl/certs/tls.crt \
-X ssl.ca.location=/ssl/certs/ca.crt
You should see the messages you have created.
Send and receive messages outside a cluster
Prerequisites
- Producers and consumers that are not in the same Kubernetes cluster can access the Kafka cluster only if an external listener is configured in your KafkaCluster CR. Check that the listenersConfig.externalListeners section exists in the KafkaCluster CR.
- Obtain the external address and port number of the cluster by running the following commands.
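The commands are missing from this view; a sketch, assuming the default Envoy-based external listener (the service name envoy-loadbalancer-external-kafka may differ in your deployment, and on some cloud providers the address is under .hostname instead of .ip):
export SERVICE_IP=$(kubectl get svc -n kafka envoy-loadbalancer-external-kafka -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export SERVICE_PORT=$(kubectl get svc -n kafka envoy-loadbalancer-external-kafka -o jsonpath='{.spec.ports[0].port}')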
- If the external listener of your Kafka cluster accepts encrypted connections, proceed to SSL enabled. Otherwise, proceed to SSL disabled.
SSL disabled
- Produce some test messages on the external client.
- If you have kcat installed, run:
kcat -P -b $SERVICE_IP:$SERVICE_PORT -t my-topic
- If you have the Java Kafka client installed, run:
kafka-console-producer.sh --bootstrap-server $SERVICE_IP:$SERVICE_PORT --topic my-topic
And type some test messages.
- Consume some messages.
- If you have kcat installed, run:
kcat -C -b $SERVICE_IP:$SERVICE_PORT -t my-topic
- If you have the Java Kafka client installed, run:
kafka-console-consumer.sh --bootstrap-server $SERVICE_IP:$SERVICE_PORT --topic my-topic --from-beginning
You should see the messages you have created.
SSL enabled
You can use the following procedure to send and receive messages from an external host that is outside a Kubernetes cluster when SSL encryption is enabled for Kafka. To test a Kafka instance secured by SSL we recommend using kcat.
To use the Java client instead of kcat, generate the proper truststore and keystore using the official docs.
- Install kcat.
- MacOS:
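The macOS command didn’t survive in this view; with Homebrew, it is:
brew install kcat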
- Ubuntu:
apt-get update
apt-get install kcat
- Connect to the Kubernetes cluster that runs your Kafka deployment.
- Create a Kafka user. The client will use this user account to access Kafka. You can use the KafkaUser custom resource to customize the access rights as needed. For an example, see the KafkaUser sketch in the in-cluster SSL section above.
- Download the certificate and the key of the user, and the CA certificate used to verify the certificate of the Kafka server. These are available in the Kubernetes Secret created for the KafkaUser. (Note: the -D flag of base64 is the macOS spelling; on Linux, use -d instead.)
kubectl get secrets -n kafka <name-of-the-user-secret> -o jsonpath="{['data']['tls\.crt']}" | base64 -D > client.crt.pem
kubectl get secrets -n kafka <name-of-the-user-secret> -o jsonpath="{['data']['tls\.key']}" | base64 -D > client.key.pem
kubectl get secrets -n kafka <name-of-the-user-secret> -o jsonpath="{['data']['ca\.crt']}" | base64 -D > ca.crt.pem
- Copy the downloaded certificates to a location that is accessible to the external host.
- If you haven’t done so already, obtain the external address and port number of the cluster.
- Produce some test messages on the host that is outside your cluster.
kcat -b $SERVICE_IP:$SERVICE_PORT -P -X security.protocol=SSL \
-X ssl.key.location=client.key.pem \
-X ssl.certificate.location=client.crt.pem \
-X ssl.ca.location=ca.crt.pem \
-t my-topic
And type some test messages.
- Consume some messages.
kcat -b $SERVICE_IP:$SERVICE_PORT -C -X security.protocol=SSL \
-X ssl.key.location=client.key.pem \
-X ssl.certificate.location=client.crt.pem \
-X ssl.ca.location=ca.crt.pem \
-t my-topic
You should see the messages you have created.
4 - CruiseControlOperation to manage Cruise Control
Koperator version 0.22 introduced the CruiseControlOperation custom resource. Koperator executes Cruise Control-related tasks based on the state of the CruiseControlOperation custom resource. This gives you better control over Cruise Control, improving reliability, configurability, and observability.
Overview
When a broker is added to or removed from the Kafka cluster, or when new storage is added for a broker, Koperator creates a CruiseControlOperation custom resource.
This custom resource describes a task that Cruise Control executes to move the partitions.
Koperator watches the created CruiseControlOperation custom resource and updates its state based on the result of the Cruise Control task.
Koperator can re-execute the task if it fails.
Cruise Control can execute only one task at a time, so the priority of the tasks depends on the type of the operation:
- Upscale operations are executed first, then
- downscale operations, then
- rebalance operations.
The following Cruise Control tasks are supported: add_broker, remove_broker, and rebalance.
You can follow the progress of the operation through the status of the KafkaCluster custom resource and through the status of the CruiseControlOperation custom resource.
The following example shows the steps of an add_broker (GracefulUpscale*) operation, but the same applies to the remove_broker (GracefulDownscale*) and rebalance (when the volumeState is GracefulDiskRebalance*) operations.
- Upscale the Kafka cluster by adding a new broker with id “3” to the KafkaCluster CR:
spec:
...
brokers:
- id: 0
brokerConfigGroup: "default"
- id: 1
brokerConfigGroup: "default"
- id: 2
brokerConfigGroup: "default"
- id: 3
brokerConfigGroup: "default"
...
- A new broker pod is created, and the cruiseControlOperationReference is added to the KafkaCluster status. This is the reference to the created CruiseControlOperation custom resource. The cruiseControlState shows the CruiseControlOperation state: GracefulUpscaleScheduled, meaning that the CruiseControlOperation has been created and is waiting for the add_broker task to finish.
status:
...
brokersState:
"3":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-addbroker-mhh72
cruiseControlState: GracefulUpscaleScheduled
volumeStates:
/kafka-logs:
cruiseControlOperationReference:
name: kafka-rebalance-h6ntt
cruiseControlVolumeState: GracefulDiskRebalanceScheduled
/kafka-logs2:
cruiseControlOperationReference:
name: kafka-rebalance-h6ntt
cruiseControlVolumeState: GracefulDiskRebalanceScheduled
...
- The add_broker Cruise Control task is in progress:
status:
...
brokersState:
"3":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-addbroker-mhh72
cruiseControlState: GracefulUpscaleRunning
...
- When the add_broker Cruise Control task is completed:
status:
...
brokersState:
"3":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-addbroker-mhh72
cruiseControlState: GracefulUpscaleSucceeded
...
There are two other possible states of cruiseControlState: GracefulUpscaleCompletedWithError and GracefulUpscalePaused.
- GracefulUpscalePaused is a special state. For details, see Control the created CruiseControlOperation.
- GracefulUpscaleCompletedWithError occurs when the Cruise Control task fails. If cruiseControlOperation.spec.errorPolicy is set to retry (the default value), Koperator re-executes the failed task every 30 seconds until it succeeds. During the re-execution, the cruiseControlState returns to GracefulUpscaleRunning.
status:
...
brokersState:
"3":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-addbroker-mhh72
cruiseControlState: GracefulUpscaleCompletedWithError
...
CruiseControlOperation CR overview
The kafka-addbroker-mhh72 CruiseControlOperation custom resource from the previous example looks like this:
kind: CruiseControlOperation
metadata:
...
name: kafka-addbroker-mhh72
...
spec:
...
status:
currentTask:
finished: "2022-11-18T09:31:40Z"
httpRequest: http://kafka-cruisecontrol-svc.kafka.svc.cluster.local:8090/kafkacruisecontrol/add_broker?allow_capacity_estimation=true&brokerid=3&data_from=VALID_WINDOWS&dryrun=false&exclude_recently_demoted_brokers=true&exclude_recently_removed_brokers=true&json=true&use_ready_default_goals=true
httpResponseCode: 200
id: 222e30f0-1e7a-4c87-901c-bed2854d69b7
operation: add_broker
parameters:
brokerid: "3"
exclude_recently_demoted_brokers: "true"
exclude_recently_removed_brokers: "true"
started: "2022-11-18T09:30:48Z"
state: Completed
summary:
Data to move: "0"
Intra broker data to move: "0"
Number of intra broker replica movements: "0"
Number of leader movements: "0"
Number of replica movements: "36"
Provision recommendation: '[ReplicaDistributionGoal] Remove at least 4 brokers.'
Recent windows: "1"
errorPolicy: retry
retryCount: 0
- The status.currentTask field describes the Cruise Control task.
- The httpRequest field contains the whole POST HTTP request that was executed.
- The id is the Cruise Control task identifier.
- The state shows the progress of the request.
- The summary is Cruise Control’s optimization proposal. It shows the scope of the changes that Cruise Control will apply through the operation.
- The retryCount field shows the number of retries when a task has failed and cruiseControlOperation.spec.errorPolicy is set to retry. In this case, the status.failedTask field shows the history of the failed tasks (including their error messages).
For further information on the fields, see the source code.
Control the created CruiseControlOperation
Stop a task
Task execution can be stopped gracefully by deleting the CruiseControlOperation. In this case, the corresponding cruiseControlState or cruiseControlVolumeState transitions to Graceful*Succeeded.
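For example, using the operation from the overview (the resource name is illustrative):
kubectl delete cruisecontroloperation kafka-addbroker-mhh72 -n kafka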
Handle failed tasks
cruiseControlOperation.spec.errorPolicy defines how a failed Cruise Control task should be handled. When the errorPolicy is set to retry, Koperator re-executes the failed task every 30 seconds. When it is set to ignore, Koperator treats the failed task as completed, thus the cruiseControlState or the cruiseControlVolumeState transitions to Graceful*Succeeded.
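As a sketch, the policy can be flipped on an existing operation with kubectl patch (the resource name is illustrative):
kubectl patch cruisecontroloperation kafka-removebroker-lg7qm -n kafka --type=merge -p '{"spec":{"errorPolicy":"ignore"}}'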
Pause a task
When there is a Cruise Control task that cannot complete without an error and cruiseControlOperation.spec.errorPolicy is set to retry, Koperator re-executes the task until it succeeds. You can pause the automatic re-execution by adding the following label to the corresponding CruiseControlOperation custom resource; for details, see this example. To continue the task, remove the label (or set it to any value other than true).
Pausing is useful when the cause of the error cannot be fixed any time soon, but you want to retry the operation later, once the problem is resolved.
Paused CruiseControlOperation tasks are ignored when selecting operations for execution: when a new CruiseControlOperation with the same operation type (status.currentTask.operation) is created, the new one is executed and the paused one is skipped.
kind: CruiseControlOperation
metadata:
...
name: kafka-addbroker-mhh72
labels:
pause: "true"
...
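Equivalently, as a kubectl one-liner (the resource name is illustrative):
kubectl label cruisecontroloperation kafka-addbroker-mhh72 -n kafka pause=true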
Automatic cleanup
You can set an automatic cleanup time for the created CruiseControlOperations in the KafkaCluster custom resource.
In the following example, finished (completed successfully, or completedWithError with errorPolicy: ignore) CruiseControlOperation custom resources are automatically deleted after 300 seconds.
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
...
spec:
...
cruiseControlConfig:
cruiseControlOperationSpec:
ttlSecondsAfterFinished: 300
...
Example for the ignore and pause use-cases
This example shows how to ignore and pause an operation.
- Using the original example with four Kafka brokers from the Overview as the starting point, this example removes two brokers at the same time by editing the KafkaCluster custom resource and deleting broker 2 and broker 3.
spec:
...
brokers:
- id: 0
brokerConfigGroup: "default"
- id: 1
brokerConfigGroup: "default"
- Each broker removal gets its own remove_broker operation (kafka-removebroker-lg7qm for broker 2, kafka-removebroker-4plfq for broker 3). The example shows that the first one is already in running state.
status:
...
brokersState:
"2":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-lg7qm
cruiseControlState: GracefulDownscaleRunning
...
"3":
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-4plfq
cruiseControlState: GracefulDownscaleScheduled
...
- Assume that something unexpected happened, so the remove_broker operation enters the GracefulDownscaleCompletedWithError state.
status:
...
brokersState:
"2":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-lg7qm
cruiseControlState: GracefulDownscaleCompletedWithError
...
"3":
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-4plfq
cruiseControlState: GracefulDownscaleScheduled
...
- At this point, you can decide how to handle this problem using one of three options: retry the operation (the default behavior), ignore the error, or use the pause label to pause the operation and let Koperator execute the next one.
- Ignore use-case: To ignore the error, set the cruiseControlOperation.spec.errorPolicy field to ignore. The operation is considered successful, and the broker pod and the persistent volume are removed from the Kubernetes cluster and from the KafkaCluster status. Koperator continues with the next task: remove_broker for kafka-removebroker-4plfq.
status:
...
brokersState:
...
"3":
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-4plfq
cruiseControlState: GracefulDownscaleRunning
...
- Pause use-case: To pause this task, add the pause: true label to the kafka-removebroker-lg7qm CruiseControlOperation. Koperator won’t try to re-execute this task, and moves on to the next remove_broker operation.
status:
...
brokersState:
"2":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-lg7qm
cruiseControlState: GracefulDownscalePaused
...
"3":
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-4plfq
cruiseControlState: GracefulDownscaleRunning
...
When the second remove_broker operation is finished, only the paused task remains:
status:
...
brokersState:
"2":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-lg7qm
cruiseControlState: GracefulDownscalePaused
...
When the problem has been resolved, you can retry removing broker 2 by removing the pause label.
status:
...
brokersState:
"2":
...
gracefulActionState:
cruiseControlOperationReference:
name: kafka-removebroker-lg7qm
cruiseControlState: GracefulDownscaleRunning
...
If everything goes well, the broker is removed.
5 - Configure Kafka cluster
Koperator provides convenient ways of configuring Kafka resources through Kubernetes custom resources.
Overview
The KafkaCluster custom resource is the main configuration resource for the Kafka clusters.
It defines the Apache Kafka cluster properties, such as the broker and listener configurations.
By deploying the KafkaCluster custom resource, Koperator sets up your Kafka cluster.
You can change your Kafka cluster properties by updating the KafkaCluster custom resource.
The KafkaCluster custom resource always reflects your Kafka cluster: when something changes in the KafkaCluster custom resource, Koperator reconciles the changes to your Kafka cluster.
For the CRD reference, see CRD.
5.1 - CRD
The following sections contain the reference documentation of the various custom resource definitions (CRDs) that are specific to Koperator.
For sample YAML files, see the samples directory in the GitHub project.
5.1.1 - KafkaCluster CRD schema reference (group kafka.banzaicloud.io)
KafkaCluster is the Schema for the kafkaclusters API
KafkaCluster
KafkaCluster is the Schema for the kafkaclusters API
- Full name: kafkaclusters.kafka.banzaicloud.io
- Group: kafka.banzaicloud.io
- Singular name: kafkacluster
- Plural name: kafkaclusters
- Scope: Namespaced
- Versions: v1beta1
Version v1beta1
Properties
object
KafkaClusterSpec defines the desired state of KafkaCluster
array
Custom ports to expose in the container. Example use case: a custom Kafka distribution that includes an integrated metrics API endpoint.
object
ContainerPort represents a network port in a single container.
integer
Required
Number of port to expose on the pod’s IP address. This must be a valid port number, 0 < x < 65536.
string
What host IP to bind the external port to.
integer
Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
string
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
string
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to “TCP”.
object
AlertManagerConfig defines configuration for alert manager
integer
DownScaleLimit the limit for auto-downscaling the Kafka cluster. Once the size of the cluster (number of brokers) reaches or falls below this limit the auto-downscaling triggered by alerts is disabled until the cluster size exceeds this limit. This limit is not enforced if this field is omitted or is <= 0.
integer
UpScaleLimit the limit for auto-upscaling the Kafka cluster. Once the size of the cluster (number of brokers) reaches or exceeds this limit the auto-upscaling triggered by alerts is disabled until the cluster size falls below this limit. This limit is not enforced if this field is omitted or is <= 0.
object
Broker defines the broker basic configuration
object
BrokerConfig defines the broker configuration
object
Any definition received through this field will override the default behaviour of the OneBrokerPerNode flag, and the operator assumes that the user is aware of how scheduling is done by Kubernetes. Affinity can be set through brokerConfigGroups definitions and for individual brokers as well, where the latter setting overrides the group setting.
object
Describes node affinity scheduling rules for the pod.
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
object
An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
object
Required
A node selector term, associated with the corresponding weight.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
integer
Required
Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
object
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
array
Required
Required. A list of node selector terms. The terms are ORed.
object
A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
object
Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
object
Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
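Taken together, these fields can, for example, spread brokers across nodes. The following is a minimal sketch, assuming it is placed under a broker group's brokerConfig and that the broker pods carry a hypothetical app: kafka label:

```yaml
brokerConfig:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app          # hypothetical label carried by the broker pods
                operator: In
                values:
                  - kafka
          topologyKey: kubernetes.io/hostname   # at most one broker per node
```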
- brokerAnnotations (object): Custom annotations for the broker pods, e.g. Prometheus scraping annotations: prometheus.io/scrape: "true", prometheus.io/port: "9020".
- brokerIngressMapping (array): BrokerIngressMapping allows setting specific ingress-to-broker mappings. If left empty, all brokers inherit the default one specified under the external listeners config. Only used when ExternalListeners.Config is populated.
- brokerLabels (object): Custom labels for the broker pods. An example use case is Prometheus monitoring, where the group of each broker is captured as a label, e.g. kafka_broker_group: "default_group". These labels will not override the reserved labels that the operator relies on, for example "app", "brokerId", and "kafka_cr".
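For example, the scraping annotations and a group label mentioned above could be set per broker group as follows (a minimal sketch; the brokerConfigGroups name is illustrative):

```yaml
brokerConfigGroups:
  default:
    brokerAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9020"
    brokerLabels:
      kafka_broker_group: "default_group"
```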
- containers (array): Containers add extra containers to the Kafka broker pod.
  - items (object): A single application container that you want to run within a pod.
    - args (array): Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
    - command (array): Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references are expanded the same way as for args. Cannot be updated.
    - env (array): List of environment variables to set in the container. Cannot be updated.
      - items (object): EnvVar represents an environment variable present in a Container.
        - name (string, required): Name of the environment variable. Must be a C_IDENTIFIER.
        - value (string): Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".
        - valueFrom (object): Source for the environment variable's value. Cannot be used if value is not empty.
          - configMapKeyRef (object): Selects a key of a ConfigMap.
            - optional (boolean): Specify whether the ConfigMap or its key must be defined.
          - fieldRef (object): Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
            - apiVersion (string): Version of the schema the FieldPath is written in terms of, defaults to "v1".
            - fieldPath (string, required): Path of the field to select in the specified API version.
          - resourceFieldRef (object): Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
            - containerName (string): Container name: required for volumes, optional for env vars.
            - divisor: Specifies the output format of the exposed resources, defaults to "1".
            - resource (string, required): Required: resource to select.
          - secretKeyRef (object): Selects a key of a secret in the pod's namespace.
            - key (string, required): The key of the secret to select from. Must be a valid secret key.
            - optional (boolean): Specify whether the Secret or its key must be defined.
    - envFrom (array): List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.
      - items (object): EnvFromSource represents the source of a set of ConfigMaps.
        - configMapRef (object): The ConfigMap to select from.
          - optional (boolean): Specify whether the ConfigMap must be defined.
        - prefix (string): An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
        - secretRef (object): The Secret to select from.
          - optional (boolean): Specify whether the Secret must be defined.
    - lifecycle (object): Actions that the management system should take in response to container lifecycle events. Cannot be updated.
      - postStart (object): Handler called immediately after a container is created. It accepts the following actions:
        - exec (object): Exec specifies the action to take.
          - command (array): Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
        - httpGet (object): HTTPGet specifies the http request to perform.
          - host (string): Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
          - httpHeaders (array): Custom headers to set in the request. HTTP allows repeated headers. Each item is an HTTPHeader object describing a custom header to be used in HTTP probes.
          - path (string): Path to access on the HTTP server.
          - port: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
          - scheme (string): Scheme to use for connecting to the host. Defaults to HTTP.
        - tcpSocket (object): Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified.
          - host (string): Optional: Host name to connect to, defaults to the pod IP.
          - port: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
      - preStop (object): PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks. It accepts the same exec, httpGet, and tcpSocket actions as postStart.
    - livenessProbe (object): Periodic probe of container liveness. Cannot be updated.
      - exec (object): Exec specifies the action to take.
        - command (array): Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
      - failureThreshold (integer): Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
      - grpc (object): GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling the GRPCContainerProbe feature gate.
        - port (integer, required): Port number of the gRPC service. Number must be in the range 1 to 65535.
      - httpGet (object): HTTPGet specifies the http request to perform, with the same host, httpHeaders, path, port, and scheme fields as described under lifecycle above.
      - periodSeconds (integer): How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
      - successThreshold (integer): Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
      - tcpSocket (object): TCPSocket specifies an action involving a TCP port.
        - host (string): Optional: Host name to connect to, defaults to the pod IP.
        - port: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
      - terminationGracePeriodSeconds (integer): Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. The value must be a non-negative integer; zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
    - name (string, required): Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
    - ports (array): List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
      - items (object): ContainerPort represents a network port in a single container.
        - containerPort (integer, required): Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.
        - hostIP (string): What host IP to bind the external port to.
        - hostPort (integer): Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
        - name (string): If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
        - protocol (string): Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
    - readinessProbe (object): Periodic probe of container service readiness. It has the same fields as livenessProbe above (exec, failureThreshold, grpc, httpGet, periodSeconds, successThreshold, tcpSocket, terminationGracePeriodSeconds).
    - securityContext (object): The security options the container should be run with.
      - allowPrivilegeEscalation (boolean): AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls whether the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container 1) is run as Privileged or 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows.
      - capabilities (object): The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
        - add (array of strings): Capabilities to add; each entry represents a POSIX capabilities type.
        - drop (array of strings): Capabilities to drop; each entry represents a POSIX capabilities type.
      - privileged (boolean): Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
      - procMount (string): procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount, which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
      - readOnlyRootFilesystem (boolean): Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
      - runAsGroup (integer): The GID to run the entrypoint of the container process. Uses the runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
      - runAsNonRoot (boolean): Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
      - runAsUser (integer): The UID to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
      - seLinuxOptions (object): The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
        - level (string): Level is the SELinux level label that applies to the container.
        - role (string): Role is the SELinux role label that applies to the container.
        - type (string): Type is the SELinux type label that applies to the container.
        - user (string): User is the SELinux user label that applies to the container.
      - seccompProfile (object): The seccomp options to use by this container. If seccomp options are provided at both the pod and container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
        - localhostProfile (string): localhostProfile indicates that a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost".
        - type (string, required): type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used; RuntimeDefault - the container runtime default profile should be used; Unconfined - no profile should be applied.
      - windowsOptions (object): The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
        - gmsaCredentialSpecName (string): GMSACredentialSpecName is the name of the GMSA credential spec to use.
        - hostProcess (boolean): HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
        - runAsUserName (string): The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
    - startupProbe (object): StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes. It has the same fields as livenessProbe above.
    - stdin (boolean): Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.
    - stdinOnce (boolean): Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.
    - terminationMessagePath (string): Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. The message written is intended to be a brief final status, such as an assertion failure message. It will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.
    - terminationMessagePolicy (string): Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.
    - tty (boolean): Whether this container should allocate a TTY for itself; also requires 'stdin' to be true. Default is false.
    - volumeDevices (array): volumeDevices is the list of block devices to be used by the container.
      - items (object): volumeDevice describes a mapping of a raw block device within a container.
        - devicePath (string, required): devicePath is the path inside of the container that the device will be mapped to.
        - name (string, required): name must match the name of a persistentVolumeClaim in the pod.
    - volumeMounts (array): Pod volumes to mount into the container's filesystem. Cannot be updated.
      - items (object): VolumeMount describes a mounting of a Volume within a container.
        - mountPath (string, required): Path within the container at which the volume should be mounted. Must not contain ':'.
        - mountPropagation (string): mountPropagation determines how mounts are propagated from the host to the container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
        - name (string, required): This must match the Name of a Volume.
        - readOnly (boolean): Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
        - subPath (string): Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).
        - subPathExpr (string): Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath, but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.
    - workingDir (string): Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
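Putting several of these container fields together, an extra sidecar container on the broker pod might look like the following minimal sketch (the image, names, paths, and commands are hypothetical):

```yaml
containers:
  - name: log-shipper                # hypothetical sidecar name
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/kafka/server.log"]
    env:
      - name: POD_NAME               # value taken from the pod's metadata
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    livenessProbe:
      exec:
        command: ["pgrep", "tail"]   # non-zero exit marks the container unhealthy
      periodSeconds: 10
      failureThreshold: 3
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
      - name: kafka-data             # must match a volume name in the pod
        mountPath: /var/log/kafka
        readOnly: true
```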
- envs (array): Envs defines environment variables for the Kafka broker pods. Adding the "+" prefix to the name prepends the value to that environment variable instead of overwriting it; adding the "+" suffix appends it.
  - items (object): EnvVar represents an environment variable present in a Container, with the same name, value, and valueFrom fields (configMapKeyRef, fieldRef, resourceFieldRef, secretKeyRef) as the env entries described above.
- imagePullSecrets (array): Items are LocalObjectReference objects, which contain enough information to let you locate the referenced object inside the same namespace.
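As a sketch of the "+" merge semantics described above (the variable values and the secret name are illustrative):

```yaml
envs:
  - name: "+KAFKA_OPTS"        # "+" prefix: prepend this value to the existing KAFKA_OPTS
    value: "-javaagent:/opt/jmx-exporter/jmx_prometheus.jar=9020:/etc/jmx-exporter/config.yaml "
  - name: "CLASSPATH+"         # "+" suffix: append this value to the existing CLASSPATH
    value: ":/opt/kafka/libs/extra/*"
imagePullSecrets:
  - name: private-registry-credentials   # hypothetical pull secret
```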
array
InitContainers add extra initContainers to the Kafka broker pod
object
A single application container that you want to run within a pod.
array
Arguments to the entrypoint. The container image’s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
array
Entrypoint array. Not executed within a shell. The container image’s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
array
List of environment variables to set in the container. Cannot be updated.
array
List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.
object
EnvFromSource represents the source of a set of ConfigMaps
object
The ConfigMap to select from
boolean
Specify whether the ConfigMap must be defined
string
An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
object
The Secret to select from
boolean
Specify whether the Secret must be defined
object
EnvVar represents an environment variable present in a Container.
string
Required
Name of the environment variable. Must be a C_IDENTIFIER.
string
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.
object
Source for the environment variable’s value. Cannot be used if value is not empty.
object
Selects a key of a ConfigMap.
boolean
Specify whether the ConfigMap or its key must be defined
object
Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>']
, metadata.annotations['<KEY>']
, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
Selects a key of a secret in the pod’s namespace
string
Required
The key of the secret to select from. Must be a valid secret key.
boolean
Specify whether the Secret or its key must be defined
object
Actions that the management system should take in response to container lifecycle events. Cannot be updated.
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
object
Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
object
PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod’s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod’s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
object
Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
string
Required
Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
array
List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default “0.0.0.0” address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
object
ContainerPort represents a network port in a single container.
integer
Required
Number of port to expose on the pod’s IP address. This must be a valid port number, 0 < x < 65536.
string
What host IP to bind the external port to.
integer
Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
string
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
string
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to “TCP”.
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
boolean
AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.
object
The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
string
Capability represent POSIX capabilities type
string
Capability represent POSIX capabilities type
boolean
Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
string
procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
boolean
Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
object
The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
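The container security fields above are the standard core/v1 SecurityContext. A minimal sketch of how they compose; every value is purely illustrative:

```yaml
# Illustrative container-level securityContext; values are examples,
# not recommendations.
securityContext:
  runAsUser: 1000                  # UID for the container entrypoint
  runAsNonRoot: true               # kubelet refuses to start the container as UID 0
  allowPrivilegeEscalation: false  # sets no_new_privs on the container process
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault           # or Localhost together with localhostProfile
```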
object
StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
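A minimal startupProbe sketch assembled from the probe fields above (core/v1 Probe); the endpoint and port are hypothetical:

```yaml
startupProbe:
  httpGet:
    path: /healthz       # hypothetical health endpoint
    port: 8080
    scheme: HTTP
  periodSeconds: 10      # probe every 10 seconds
  failureThreshold: 30   # tolerate up to 30 * 10s = 300s of startup time
  successThreshold: 1    # must be 1 for startup probes
```

Until this probe succeeds, the liveness and readiness probes are not executed, which suits a broker that needs time to load data or warm a cache.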
boolean
Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.
boolean
Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.
string
Optional: Path at which the file to which the container’s termination message will be written is mounted into the container’s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.
string
Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.
boolean
Whether this container should allocate a TTY for itself, also requires ‘stdin’ to be true. Default is false.
array
volumeDevices is the list of block devices to be used by the container.
object
volumeDevice describes a mapping of a raw block device within a container.
string
Required
devicePath is the path inside of the container that the device will be mapped to.
string
Required
name must match the name of a persistentVolumeClaim in the pod
array
Pod volumes to mount into the container’s filesystem. Cannot be updated.
object
VolumeMount describes a mounting of a Volume within a container.
string
Required
Path within the container at which the volume should be mounted. Must not contain ‘:’.
string
mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
string
Required
This must match the Name of a Volume.
boolean
Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
string
Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).
string
Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
string
Container’s working directory. If not specified, the container runtime’s default will be used, which might be configured in the container image. Cannot be updated.
string
Override for the default log4j configuration
object
Network throughput information in kB/s used by Cruise Control to determine broker network capacity. By default it is set to 125000, which means 1 Gbit/s in network throughput.
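A sketch of overriding that default; the networkConfig field names (incomingNetworkThroughPut, outgoingNetworkThroughPut) are an assumption based on Koperator's NetworkConfig type:

```yaml
brokerConfig:
  networkConfig:
    # kB/s values; 250000 kB/s corresponds to roughly 2 Gbit/s
    incomingNetworkThroughPut: "250000"   # assumed field name
    outgoingNetworkThroughPut: "250000"   # assumed field name
```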
object
External listeners that use NodePort type services to expose the broker outside the Kubernetes cluster, and their external IP to advertise for the Kafka broker external listener. The external IP value is ignored for external listeners that use LoadBalancer type services to expose the broker outside the Kubernetes cluster. Also, when the “hostnameOverride” field of the external listener is set, it overrides the broker’s external listener advertised address according to the description of the “hostnameOverride” field.
string
When “hostnameOverride” and brokerConfig.nodePortExternalIP are empty and the NodePort access method is selected for an external listener, the NodePortNodeAddressType defines which address type of the Kafka broker’s Kubernetes node shall be used in the advertised.listeners property. See https://kubernetes.io/docs/concepts/architecture/nodes/#addresses. The possible values of NodePortNodeAddressType are Hostname, ExternalIP, InternalIP, InternalDNS, and ExternalDNS.
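A sketch combining the two NodePort-related fields above; the listener name and IP are hypothetical, and the exact brokerConfig placement is an assumption:

```yaml
brokerConfig:
  nodePortExternalIP:
    external: "192.0.2.10"              # listener name -> advertised external IP (example)
  # Used only when the map above and hostnameOverride are empty:
  nodePortNodeAddressType: ExternalIP   # Hostname | ExternalIP | InternalIP | InternalDNS | ExternalDNS
```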
object
PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.
integer
A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:
1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR’d with rw-rw----
If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.
string
fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are “OnRootMismatch” and “Always”. If not specified, “Always” is used. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
array
A list of groups applied to the first process run in each container, in addition to the container’s primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
array
Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.
object
Sysctl defines a kernel parameter to be set
string
Required
Name of a property to set
string
Required
Value of a property to set
object
The Windows specific settings applied to all containers. If unspecified, the options within a container’s SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
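A pod-level security sketch built from the PodSecurityContext fields above; the values and the podSecurityContext key placement are illustrative assumptions:

```yaml
brokerConfig:
  podSecurityContext:
    fsGroup: 2000                      # volumes become group-owned by GID 2000
    fsGroupChangePolicy: OnRootMismatch
    runAsUser: 1000
    runAsNonRoot: true
    sysctls:
      - name: net.core.somaxconn       # hypothetical kernel parameter
        value: "1024"
```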
string
PriorityClassName specifies the priority class name for the broker pods. If specified, a PriorityClass resource with this name must be created beforehand. If not specified, the broker pods’ priority defaults to zero.
object
ResourceRequirements describes the compute resource requirements.
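For example, a hedged sketch of broker compute resources; the resourceRequirements key under brokerConfig is an assumption, while the requests/limits shape is the standard core/v1 ResourceRequirements:

```yaml
brokerConfig:
  resourceRequirements:    # assumed key name
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
```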
object
SecurityContext allows setting the security context for the Kafka container.
boolean
AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.
object
The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
string
Capability represent POSIX capabilities type
string
Capability represent POSIX capabilities type
boolean
Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
string
procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
boolean
Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
object
The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
object
StorageConfig defines the broker storage configuration
object
If set, https://kubernetes.io/docs/concepts/storage/volumes#emptydir is used as storage for the Kafka broker log dirs. Using emptyDir as Kafka broker storage is useful in development environments where data loss is not a concern, as data stored on emptyDir-backed storage is lost on pod restarts. Either pvcSpec or emptyDir has to be set. When both pvcSpec and emptyDir fields are set, pvcSpec is used by default.
sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
object
dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
object
selector is a label query over volumes to consider for binding.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
string
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
string
volumeName is the binding reference to the PersistentVolume backing this claim.
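Putting the storage fields together, a sketch of one persistent log directory; the storageConfigs list and mountPath key are assumptions based on this CRD's StorageConfig:

```yaml
brokerConfig:
  storageConfigs:                  # assumed list name
    - mountPath: /kafka-logs       # where the broker log dir is mounted
      pvcSpec:                     # pvcSpec wins if emptyDir is also set
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```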
integer
TerminationGracePeriod defines the pod termination grace period
object
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
string
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
string
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
string
Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
integer
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
string
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
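A toleration sketch using the fields above (core/v1 Toleration); the taint key and value are hypothetical:

```yaml
tolerations:
  - key: dedicated       # hypothetical taint key
    operator: Equal
    value: kafka
    effect: NoSchedule
```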
array
VolumeMounts define some extra Kubernetes VolumeMounts for the Kafka broker Pods.
object
VolumeMount describes a mounting of a Volume within a container.
string
Required
Path within the container at which the volume should be mounted. Must not contain ‘:’.
string
mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
string
Required
This must match the Name of a Volume.
boolean
Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
string
Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).
string
Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
array
Volumes define some extra Kubernetes Volumes for the Kafka broker Pods.
object
Volume represents a named volume in a pod that may be accessed by any container in the pod.
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
integer
partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty).
object
azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
string
cachingMode is the Host Caching mode: None, Read Only, Read Write.
string
Required
diskName is the Name of the data disk in the blob storage
string
Required
diskURI is the URI of data disk in the blob storage
string
fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
kind expected values are: Shared (multiple blob disks per storage account), Dedicated (single blob disk per storage account), Managed (azure managed data disk, only in managed availability set). Defaults to Shared.
boolean
readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
azureFile represents an Azure File Service mount on the host and bind mount to the pod.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
string
Required
secretName is the name of secret that contains Azure Storage Account Name and Key
string
Required
shareName is the azure share Name
object
cephFS represents a Ceph FS mount on the host that shares a pod’s lifetime
string
path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
object
secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.
object
configMap represents a configMap that should populate this volume
integer
defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specify whether the ConfigMap or its keys must be defined
object
csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
string
Required
driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.
string
fsType to mount. Ex. “ext4”, “xfs”, “ntfs”. If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.
object
nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.
boolean
readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
object
volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver’s documentation for supported values.
object
downwardAPI represents downward API about the pod that should populate this volume
integer
Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
Items is a list of downward API volume file
object
DownwardAPIVolumeFile represents information to create the file containing the pod field
object
Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
integer
Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
object
ephemeral represents a volume that is handled by a cluster storage driver. The volume’s lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).
Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.
Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.
A pod can use both types of ephemeral volumes and persistent volumes at the same time.
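A generic ephemeral volume sketch per the guidance above; the PVC is created together with the pod and deleted with it. The volume name and storage class are hypothetical:

```yaml
volumes:
  - name: scratch                       # hypothetical volume name
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard    # hypothetical storage class
          resources:
            requests:
              storage: 1Gi
```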
object
Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name>, where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
An existing PVC with that name that is not owned by the pod will not be used for the pod, to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.
Required, must not be nil.
object
May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.
object
Required
The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
object
dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
object
selector is a label query over volumes to consider for binding.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
string
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
string
volumeName is the binding reference to the PersistentVolume backing this claim.
object
fc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
integer
lun is Optional: FC target lun number
boolean
readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
array
targetWWNs is Optional: FC target worldwide names (WWNs)
array
wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.
object
flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.
string
Required
driver is the name of the driver to use for this volume.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. The default filesystem depends on FlexVolume script.
object
options is Optional: this field holds extra command options if any.
boolean
readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.
object
flocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running
string
datasetName is the name of the dataset, stored as metadata -> name on the dataset for Flocker; it should be considered deprecated.
string
datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
integer
partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
object
gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.
string
directory is the target directory name. Must not contain or start with ‘..’. If ‘.’ is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.
string
revision is the commit hash for the specified revision.
object
hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
boolean
chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication
boolean
chapAuthSession defines whether support iSCSI Session CHAP authentication
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi
string
initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection.
string
Required
iqn is the target iSCSI Qualified Name.
string
iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to ‘default’ (tcp).
integer
Required
lun represents iSCSI Target Lun number.
array
portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
boolean
readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.
object
secretRef is the CHAP Secret for iSCSI target and initiator authentication
string
Required
targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
boolean
readOnly will force the ReadOnly setting in VolumeMounts. Defaults to false.
object
photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
Required
pdID is the ID that identifies Photon Controller persistent disk
object
portworxVolume represents a portworx volume attached and mounted on kubelets host machine
string
fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”. Implicitly inferred to be “ext4” if unspecified.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
string
Required
volumeID uniquely identifies a Portworx volume
object
projected items for all in one resources secrets, configmaps, and downward API
integer
defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
sources is the list of volume projections
object
Projection that may be projected along with other supported volume types
object
configMap information about the configMap data to project
array
items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specify whether the ConfigMap or its keys must be defined
object
downwardAPI information about the downwardAPI data to project
array
Items is a list of DownwardAPIVolume file
object
DownwardAPIVolumeFile represents information to create the file containing the pod field
object
Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
integer
Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
secret information about the secret data to project
array
items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional field specify whether the Secret or its key must be defined
object
serviceAccountToken is information about the serviceAccountToken data to project
string
audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.
integer
expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.
string
Required
path is the path relative to the mount point of the file to project the token into.
object
quobyte represents a Quobyte mount on the host that shares a pod’s lifetime
string
group to map volume access to. Default is no group.
boolean
readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
string
Required
registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes
string
tenant owning the given Quobyte volume in the backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin.
string
user to map volume access to. Defaults to the serviceaccount user.
string
Required
volume is a string that references an already created Quobyte volume by name.
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
object
scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Default is “xfs”.
string
Required
gateway is the host address of the ScaleIO API Gateway.
string
protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.
boolean
readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
Required
secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.
boolean
sslEnabled is a flag to enable/disable SSL communication with the Gateway. Default is false.
string
storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.
string
storagePool is the ScaleIO Storage Pool associated with the protection domain.
string
Required
system is the name of the storage system as configured in ScaleIO.
string
volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.
integer
defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional field specifies whether the Secret or its keys must be defined
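The Secret volume fields above (defaultMode, items, key, mode, path, optional) combine as in the following hedged sketch of a pod volume; the volume, Secret, and key names are hypothetical.

```yaml
# Hypothetical example: project selected keys from a Secret into a volume,
# mapping each key to a relative path with explicit file modes.
volumes:
  - name: broker-credentials         # hypothetical volume name
    secret:
      secretName: kafka-user-secret  # hypothetical Secret name
      defaultMode: 0644              # octal in YAML; JSON requires the decimal form (420)
      optional: false                # volume setup errors if the Secret is missing
      items:
        - key: password              # key in the Secret's Data field
          path: auth/password.txt    # relative path inside the mount; must not contain '..'
          mode: 0600                 # overrides defaultMode for this file
```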
object
storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.
string
volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
string
volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod’s namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to “default” if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.
object
vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine
string
fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.
string
storagePolicyName is the storage Policy Based Management (SPBM) profile name.
string
Required
volumePath is the path that identifies vSphere volume vmdk
object
ClientSSLCertSecret is a reference to the Kubernetes secret where a custom client SSL certificate can be provided. It will be used by Koperator, Cruise Control, and the Cruise Control Metrics Reporter to communicate over SSL with the internal listener that is used for inter-broker communication. The client certificate must share the same chain of trust as the server certificate used by the corresponding internal listener. The secret must contain the keystore and truststore JKS files and their password in base64-encoded format under the keystore.jks, truststore.jks, and password data fields.
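As a hedged illustration of the required secret layout, the sketch below shows a Secret with the three data fields the description mandates; the Secret name is hypothetical and the values are placeholders for base64-encoded content.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: client-ssl-cert            # hypothetical name, referenced from ClientSSLCertSecret
type: Opaque
data:
  keystore.jks: <base64-encoded JKS keystore>
  truststore.jks: <base64-encoded JKS truststore>
  password: <base64-encoded password for the keystore and truststore>
```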
object
Required
CruiseControlConfig defines the config for Cruise Control
object
Affinity is a group of affinity scheduling rules.
object
Describes node affinity scheduling rules for the pod.
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
object
An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
object
Required
A node selector term, associated with the corresponding weight.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
integer
Required
Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
object
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
array
Required
Required. A list of node selector terms. The terms are ORed.
object
A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
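Taken together, the node affinity fields above can be sketched as follows; the label keys, values, and weight are illustrative.

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:                 # terms are ORed
        - matchExpressions:              # requirements within a term are ANDed
            - key: kubernetes.io/arch
              operator: In               # In, NotIn, Exists, DoesNotExist, Gt, Lt
              values: ["amd64", "arm64"]
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                       # range 1-100, summed per matching node
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/kafka   # hypothetical label
              operator: Exists           # values must be empty for Exists
```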
object
Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
object
Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which a pod of the set of pods is running.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
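A hedged sketch of the pod anti-affinity shape described above; the pod label is hypothetical.

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cruisecontrol                   # hypothetical pod label
        topologyKey: kubernetes.io/hostname      # must be non-empty
        namespaceSelector: {}                    # empty selector matches all namespaces
```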
object
Annotations to be applied to the CruiseControl pod
object
CruiseControlOperationSpec specifies the configuration of the CruiseControlOperation handling
integer
When TTLSecondsAfterFinished is specified, the created and finished (completed successfully, or completedWithError with errorPolicy: ignore) cruiseControlOperation custom resource is deleted once the given time has elapsed. When it is 0, the resource is deleted immediately after the operation finishes. When it is not specified, the resource is not removed. The value must be zero or a positive integer.
object
CruiseControlTaskSpec specifies the configuration of the CC Tasks
integer
Required
RetryDurationMinutes describes the amount of time the Operator waits for the task to complete.
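A hedged sketch of how these two blocks might look inside a KafkaCluster spec; verify the exact field names and casing against the installed CRD, as the values here are purely illustrative.

```yaml
# Illustrative sketch only; field names/casing should be checked against the CRD.
spec:
  cruiseControlConfig:
    cruiseControlOperationSpec:
      ttlSecondsAfterFinished: 300   # delete finished operations after 5 minutes; 0 deletes immediately
    cruiseControlTaskSpec:
      RetryDurationMinutes: 5        # how long the operator waits for a Cruise Control task
```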
object
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
array
InitContainers add extra initContainers to the CruiseControl pod
object
A single application container that you want to run within a pod.
array
Arguments to the entrypoint. The container image’s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
array
Entrypoint array. Not executed within a shell. The container image’s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
array
List of environment variables to set in the container. Cannot be updated.
array
List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.
object
EnvFromSource represents the source of a set of ConfigMaps
object
The ConfigMap to select from
boolean
Specify whether the ConfigMap must be defined
string
An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
object
The Secret to select from
boolean
Specify whether the Secret must be defined
object
EnvVar represents an environment variable present in a Container.
string
Required
Name of the environment variable. Must be a C_IDENTIFIER.
string
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.
object
Source for the environment variable’s value. Cannot be used if value is not empty.
object
Selects a key of a ConfigMap.
boolean
Specify whether the ConfigMap or its key must be defined
object
Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
Selects a key of a secret in the pod’s namespace
string
Required
The key of the secret to select from. Must be a valid secret key.
boolean
Specify whether the Secret or its key must be defined
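The four environment variable value sources above can be combined as in this hedged sketch; the ConfigMap, Secret, and container names are hypothetical.

```yaml
env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: app-config             # hypothetical ConfigMap
        key: log-level
        optional: true               # container starts even if the key is absent
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: app           # optional for env vars
        resource: limits.cpu
        divisor: "1"                 # output format of the exposed resource
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret              # hypothetical Secret
        key: password
```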
object
Actions that the management system should take in response to container lifecycle events. Cannot be updated.
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
object
Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a TCP handler is specified.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
object
PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod’s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod’s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
object
Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a TCP handler is specified.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
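A hedged sketch of the lifecycle handlers described above; the command and endpoint are hypothetical. Note that exec commands are not run in a shell, which is why a shell is invoked explicitly here.

```yaml
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo started > /tmp/started"]
  preStop:
    httpGet:
      path: /shutdown               # hypothetical endpoint called before termination
      port: 8080
      scheme: HTTP                  # default
```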
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
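Combining the probe fields above, a liveness probe might be sketched as follows; the endpoint and port are hypothetical.

```yaml
livenessProbe:
  httpGet:
    path: /healthz                       # hypothetical health endpoint
    port: 9090
    scheme: HTTP                         # default
  periodSeconds: 10                      # default
  failureThreshold: 3                    # default; minimum 1
  successThreshold: 1                    # must be 1 for liveness probes
  terminationGracePeriodSeconds: 30      # beta; requires the ProbeTerminationGracePeriod gate
```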
string
Required
Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
array
List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default “0.0.0.0” address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.
object
ContainerPort represents a network port in a single container.
integer
Required
Number of port to expose on the pod’s IP address. This must be a valid port number, 0 < x < 65536.
string
What host IP to bind the external port to.
integer
Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
string
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.
string
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to “TCP”.
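A hedged sketch of a ContainerPort entry using the fields above; the name and number are illustrative.

```yaml
ports:
  - name: metrics                   # IANA_SVC_NAME, unique within the pod
    containerPort: 9020             # 0 < x < 65536
    protocol: TCP                   # default; UDP and SCTP are also valid
```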
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
boolean
AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.
object
The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
string
Capability represent POSIX capabilities type
string
Capability represent POSIX capabilities type
boolean
Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
string
procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
boolean
Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
object
The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
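A restrictive container security context built from the fields above might look like this hedged sketch; the UID is illustrative.

```yaml
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000                   # illustrative UID
  capabilities:
    drop: ["ALL"]                   # POSIX capability names
  seccompProfile:
    type: RuntimeDefault            # container runtime default profile
```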
object
StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
object
Exec specifies the action to take.
array
Command is the command line to execute inside the container, the working directory for the command is root (‘/’) in the container’s filesystem. The command is simply exec’d, it is not run inside a shell, so traditional shell instructions (‘|’, etc) won’t work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
integer
Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.
object
GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate.
integer
Required
Port number of the gRPC service. Number must be in the range 1 to 65535.
object
HTTPGet specifies the http request to perform.
string
Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
array
Custom headers to set in the request. HTTP allows repeated headers.
object
HTTPHeader describes a custom header to be used in HTTP probes
string
Path to access on the HTTP server.
Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
string
Scheme to use for connecting to the host. Defaults to HTTP.
integer
How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
integer
Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
object
TCPSocket specifies an action involving a TCP port.
string
Optional: Host name to connect to, defaults to the pod IP.
Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
integer
Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod’s terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.
boolean
Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.
boolean
Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.
string
Optional: Path at which the file to which the container’s termination message will be written is mounted into the container’s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.
string
Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.
boolean
Whether this container should allocate a TTY for itself, also requires ‘stdin’ to be true. Default is false.
array
volumeDevices is the list of block devices to be used by the container.
object
volumeDevice describes a mapping of a raw block device within a container.
string
Required
devicePath is the path inside of the container that the device will be mapped to.
string
Required
name must match the name of a persistentVolumeClaim in the pod
array
Pod volumes to mount into the container’s filesystem. Cannot be updated.
object
VolumeMount describes a mounting of a Volume within a container.
string
Required
Path within the container at which the volume should be mounted. Must not contain ‘:’.
string
mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
string
Required
This must match the Name of a Volume.
boolean
Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
string
Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).
string
Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
string
Container’s working directory. If not specified, the container runtime’s default will be used, which might be configured in the container image. Cannot be updated.
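A hedged sketch of the volumeDevices and volumeMounts shapes above; all names and paths are hypothetical.

```yaml
volumeDevices:
  - name: raw-block-pvc             # must match a persistentVolumeClaim volume in the pod
    devicePath: /dev/xvda           # path the block device is mapped to in the container
volumeMounts:
  - name: data                      # must match the name of a pod volume
    mountPath: /var/lib/app         # must not contain ':'
    readOnly: true
    subPath: broker-0               # relative path within the volume; mutually exclusive with subPathExpr
```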
object
PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.
integer
A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:
1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR’d with rw-rw----
If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.
string
fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are “OnRootMismatch” and “Always”. If not specified, “Always” is used. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
array
A list of groups applied to the first process run in each container, in addition to the container’s primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
array
Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.
object
Sysctl defines a kernel parameter to be set
string
Required
Name of a property to set
string
Required
Value of a property to set
object
The Windows specific settings applied to all containers. If unspecified, the options within a container’s SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
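Pulling the pod-level security fields above together, a hedged sketch might be:

```yaml
securityContext:
  fsGroup: 2000                     # illustrative GID applied to supported volumes
  fsGroupChangePolicy: OnRootMismatch
  runAsNonRoot: true
  supplementalGroups: [3000]        # added to each container's primary GID
  sysctls:
    - name: net.core.somaxconn      # must be allowed by the kubelet/runtime
      value: "1024"
  seccompProfile:
    type: RuntimeDefault
```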
string
PriorityClassName specifies the priority class name for the CruiseControl pod. If specified, the PriorityClass resource with this PriorityClassName must be created beforehand. If not specified, the CruiseControl pod’s priority defaults to zero.
object
ResourceRequirements describes the compute resource requirements.
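A hedged sketch of resource requirements and a priority class for the CruiseControl pod; the field names under cruiseControlConfig follow the CRD conventions as best understood here and should be verified against the installed CRD.

```yaml
# Illustrative sketch only; values and field names are not authoritative.
spec:
  cruiseControlConfig:
    priorityClassName: high-priority-apps   # the PriorityClass must already exist
    resourceRequirements:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 4Gi
```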
object
SecurityContext allows setting the security context for the CruiseControl container
boolean
AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.
object
The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.
string
Capability represent POSIX capabilities type
string
Capability represent POSIX capabilities type
boolean
Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
string
procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
boolean
Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options used by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
object
The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
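A hedged sketch of a container-level security context for CruiseControl, using only fields described above (all values are illustrative, not recommendations):

    cruiseControlConfig:
      securityContext:
        runAsNonRoot: true              # kubelet refuses to start the container as UID 0
        runAsUser: 1000                 # overrides the UID from image metadata
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # POSIX capabilities to remove
        seccompProfile:
          type: RuntimeDefault          # Localhost | RuntimeDefault | Unconfined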
object
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
string
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
string
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
string
Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
integer
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
string
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
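For example, a toleration combining the fields above, again assuming it sits under spec.cruiseControlConfig (the taint key and value are hypothetical):

    cruiseControlConfig:
      tolerations:
        - key: dedicated                # empty key with operator Exists matches all taints
          operator: Equal               # Equal (default) or Exists
          value: kafka
          effect: NoExecute             # NoSchedule | PreferNoSchedule | NoExecute
          tolerationSeconds: 3600       # only honored when effect is NoExecute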
object
TopicConfig holds info for topic configuration regarding partitions and replicationFactor
array
VolumeMounts define some extra Kubernetes Volume mounts for the CruiseControl Pods.
object
VolumeMount describes a mounting of a Volume within a container.
string
Required
Path within the container at which the volume should be mounted. Must not contain ‘:’.
string
mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.
string
Required
This must match the Name of a Volume.
boolean
Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
string
Path within the volume from which the container’s volume should be mounted. Defaults to “” (volume’s root).
string
Expanded path within the volume from which the container’s volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container’s environment. Defaults to “” (volume’s root). SubPathExpr and SubPath are mutually exclusive.
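Putting the mount fields together with the Volumes array described just below, a sketch of one extra volume wired into the CruiseControl pods (the ConfigMap name is hypothetical):

    cruiseControlConfig:
      volumes:
        - name: extra-config
          configMap:
            name: cc-extra-config       # hypothetical ConfigMap
      volumeMounts:
        - name: extra-config            # must match a Volume name
          mountPath: /etc/cc-extra      # must not contain ':'
          readOnly: true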
array
Volumes define some extra Kubernetes Volumes for the CruiseControl Pods.
object
Volume represents a named volume in a pod that may be accessed by any container in the pod.
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
integer
partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty).
object
azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
string
cachingMode is the Host Caching mode: None, Read Only, Read Write.
string
Required
diskName is the Name of the data disk in the blob storage
string
Required
diskURI is the URI of data disk in the blob storage
string
fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
kind expected values are: Shared (multiple blob disks per storage account), Dedicated (single blob disk per storage account), and Managed (Azure managed data disk, only in a managed availability set). Defaults to Shared.
boolean
readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
azureFile represents an Azure File Service mount on the host and bind mount to the pod.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
string
Required
secretName is the name of secret that contains Azure Storage Account Name and Key
string
Required
shareName is the azure share Name
object
cephFS represents a Ceph FS mount on the host that shares a pod’s lifetime
string
path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
object
secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.
object
configMap represents a configMap that should populate this volume
integer
defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specifies whether the ConfigMap or its keys must be defined
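Note the mode-bits convention above: YAML manifests may use octal literals, while JSON manifests must use the decimal equivalent (octal 0644 is decimal 420). A brief sketch of a fragment of the volumes list, with hypothetical names:

    volumes:
      - name: scripts
        configMap:
          name: cc-scripts
          defaultMode: 0755             # octal in YAML; write 493 in JSON
          items:
            - key: run.sh
              path: bin/run.sh          # relative path, no '..'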
object
csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
string
Required
driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.
string
fsType to mount. Ex. “ext4”, “xfs”, “ntfs”. If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.
object
nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.
boolean
readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).
object
volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver’s documentation for supported values.
object
downwardAPI represents downward API about the pod that should populate this volume
integer
Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
Items is a list of downward API volume file
object
DownwardAPIVolumeFile represents information to create the file containing the pod field
object
Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
integer
Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
object
ephemeral represents a volume that is handled by a cluster storage driver. The volume’s lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).
Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.
Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.
A pod can use both types of ephemeral volumes and persistent volumes at the same time.
object
Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name>, where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.
Required, must not be nil.
object
May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.
object
Required
The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
object
dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.
string
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
string
Required
Kind is the type of resource being referenced
string
Required
Name is the name of resource being referenced
object
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
object
selector is a label query over volumes to consider for binding.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
string
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
string
volumeName is the binding reference to the PersistentVolume backing this claim.
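A sketch of a generic ephemeral volume assembled from the PVC-template fields above; the storage class, size, and labels are illustrative:

    volumes:
      - name: scratch
        ephemeral:
          volumeClaimTemplate:
            metadata:
              labels:
                app: cruisecontrol      # copied into the generated PVC
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: standard
              resources:
                requests:
                  storage: 1Gi
    # The PVC will be named <pod name>-scratch and deleted together with the pod.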
object
fc represents a Fibre Channel resource that is attached to a kubelet’s host machine and then exposed to the pod.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
integer
lun is Optional: FC target lun number
boolean
readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
array
targetWWNs is Optional: FC target worldwide names (WWNs)
array
wwids is Optional: FC volume world wide identifiers (wwids). Either wwids or a combination of targetWWNs and lun must be set, but not both simultaneously.
object
flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.
string
Required
driver is the name of the driver to use for this volume.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. The default filesystem depends on FlexVolume script.
object
options is Optional: this field holds extra command options if any.
boolean
readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
secretRef is Optional: a reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.
object
flocker represents a Flocker volume attached to a kubelet’s host machine. This depends on the Flocker control service being running
string
datasetName is the name of the dataset stored as metadata -> name on the dataset for Flocker; this field should be considered deprecated.
string
datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
integer
partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as “1”. Similarly, the volume partition for /dev/sda is “0” (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
object
gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod’s container.
string
directory is the target directory name. Must not contain or start with ‘..’. If ‘.’ is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.
string
revision is the commit hash for the specified revision.
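Since gitRepo is deprecated, the replacement pattern mentioned above can be sketched as an init container cloning into an emptyDir. This is a generic pod-spec pattern and only applies where the pod template exposes init containers; the image and repository URL are placeholders:

    volumes:
      - name: repo
        emptyDir: {}
    initContainers:
      - name: clone
        image: alpine/git               # hypothetical clone image
        args: ["clone", "https://example.com/repo.git", "/repo"]
        volumeMounts:
          - name: repo
            mountPath: /repo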
object
hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
boolean
chapAuthDiscovery defines whether iSCSI Discovery CHAP authentication is supported
boolean
chapAuthSession defines whether iSCSI Session CHAP authentication is supported
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi
string
initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection.
string
Required
iqn is the target iSCSI Qualified Name.
string
iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to ‘default’ (tcp).
integer
Required
lun represents iSCSI Target Lun number.
array
portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
boolean
readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.
object
secretRef is the CHAP Secret for iSCSI target and initiator authentication
string
Required
targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).
boolean
readOnly will force the ReadOnly setting in VolumeMounts. Defaults to false.
object
photonPersistentDisk represents a PhotonController persistent disk attached and mounted on the kubelet’s host machine
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
Required
pdID is the ID that identifies Photon Controller persistent disk
object
portworxVolume represents a portworx volume attached and mounted on the kubelet’s host machine
string
fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”. Implicitly inferred to be “ext4” if unspecified.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
string
Required
volumeID uniquely identifies a Portworx volume
object
projected items for all-in-one resources: secrets, configmaps, and downward API
integer
defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
sources is the list of volume projections
object
Projection that may be projected along with other supported volume types
object
configMap information about the configMap data to project
array
items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specifies whether the ConfigMap or its keys must be defined
object
downwardAPI information about the downwardAPI data to project
array
Items is a list of DownwardAPIVolume file
object
DownwardAPIVolumeFile represents information to create the file containing the pod field
object
Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
integer
Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ‘..’ path. Must be utf-8 encoded. The first item of the relative path must not start with ‘..’
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
secret information about the secret data to project
array
items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specifies whether the Secret or its key must be defined
object
serviceAccountToken is information about the serviceAccountToken data to project
string
audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.
integer
expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.
string
Required
path is the path relative to the mount point of the file to project the token into.
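The projected source types above can be combined into a single volume; a sketch with placeholder names:

    volumes:
      - name: bundle
        projected:
          defaultMode: 0444
          sources:
            - configMap:
                name: app-config        # hypothetical
            - secret:
                name: app-secret        # hypothetical
            - serviceAccountToken:
                audience: kafka         # recipients must identify with this audience
                expirationSeconds: 3600 # at least 600; the kubelet rotates proactively
                path: token
            - downwardAPI:
                items:
                  - path: labels
                    fieldRef:
                      fieldPath: metadata.labels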
object
quobyte represents a Quobyte mount on the host that shares a pod’s lifetime
string
group to map volume access to. Default is no group.
boolean
readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.
string
Required
registry represents a single or multiple Quobyte Registry services, specified as a string of host:port pairs (multiple entries are separated with commas), which acts as the central registry for volumes
string
tenant owning the given Quobyte volume in the backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin.
string
user to map volume access to. Defaults to the serviceaccount user.
string
Required
volume is a string that references an already created Quobyte volume by name.
string
fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
object
scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Default is “xfs”.
string
Required
gateway is the host address of the ScaleIO API Gateway.
string
protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.
boolean
readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
Required
secretRef references the secret for the ScaleIO user and other sensitive information. If this is not provided, the Login operation will fail.
boolean
sslEnabled is a flag to enable/disable SSL communication with the Gateway; defaults to false.
string
storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.
string
storagePool is the ScaleIO Storage Pool associated with the protection domain.
string
Required
system is the name of the storage system as configured in ScaleIO.
string
volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.
integer
defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
array
items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the ‘..’ path or start with ‘..’.
object
Maps a string key to a path within a volume.
string
Required
key is the key to project.
integer
mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
string
Required
path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element ‘..’. May not start with the string ‘..’.
boolean
optional specifies whether the Secret or its keys must be defined
object
storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
string
fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
boolean
readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.
object
secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.
string
volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.
string
volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod’s namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to “default” if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.
object
vsphereVolume represents a vSphere volume attached and mounted on the kubelet’s host machine
string
fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. “ext4”, “xfs”, “ntfs”. Implicitly inferred to be “ext4” if unspecified.
string
storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.
string
storagePolicyName is the storage Policy Based Management (SPBM) profile name.
string
Required
volumePath is the path that identifies vSphere volume vmdk
object
DisruptionBudget defines the configuration for PodDisruptionBudget where the workload is managed by the kafka-operator
string
The budget to set for the PDB; it can be either a static number or a percentage
boolean
If set to true, a PodDisruptionBudget will be created
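A sketch of the two fields together, assuming they sit at spec.disruptionBudget of the KafkaCluster resource:

    spec:
      disruptionBudget:
        create: true        # generates a PodDisruptionBudget for the cluster
        budget: "20%"       # either a static number ("1") or a percentage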
object
EnvoyConfig defines the config for Envoy
object
Affinity is a group of affinity scheduling rules.
object
Describes node affinity scheduling rules for the pod.
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.
object
An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it’s a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
object
Required
A node selector term, associated with the corresponding weight.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
integer
Required
Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
object
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
array
Required
Required. A list of node selector terms. The terms are ORed.
object
A null or empty node selector term matches no objects. Its requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
array
A list of node selector requirements by node’s labels.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
array
A list of node selector requirements by node’s fields.
object
A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
The label key that the selector applies to.
string
Required
Represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
array
An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
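A sketch combining the preferred and required node-affinity terms above under the EnvoyConfig affinity field (the label keys and values are illustrative):

    envoyConfig:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:              # terms are ORed
              - matchExpressions:           # requirements within a term are ANDed
                  - key: kubernetes.io/arch
                    operator: In            # In | NotIn | Exists | DoesNotExist | Gt | Lt
                    values: ["amd64"]
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50                    # 1-100
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values: ["zone-a"]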
object
Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
object
Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
array
The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding “weight” to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
object
The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
object
Required
Required. A pod affinity term, associated with the corresponding weight.
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
integer
Required
weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
array
If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
object
Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running
object
A label query over a set of resources, in this case pods.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
object
A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means “this pod’s namespace”. An empty selector ({}) matches all namespaces.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means “this pod’s namespace”.
string
Required
This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
object
Annotations defines the annotations placed on the envoy ingress controller deployment
object
DisruptionBudget is the pod disruption budget attached to Envoy Deployment(s)
string
The budget to set for the PDB; it can be either a static number or a percentage
boolean
If set to true, will create a podDisruptionBudget
string
The strategy to be used, either minAvailable or maxUnavailable
boolean
EnableHealthCheckHttp10 is a toggle for adding HTTP1.0 support to Envoy health-check, default false
object
Envoy command line arguments
array
ImagePullSecrets for the envoy image pull
object
LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
string
LoadBalancerIP can be used to specify an exact IP for the LoadBalancer service
object
NodeSelector is the node selector expression for envoy pods
object
PodSecurityContext holds pod-level security attributes and common container settings for the Envoy pods.
integer
A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:
1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw----
If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.
string
fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are “OnRootMismatch” and “Always”. If not specified, “Always” is used. Note that this field cannot be set when spec.os.name is windows.
integer
The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
boolean
Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
integer
The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
object
The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
string
Level is SELinux level label that applies to the container.
string
Role is a SELinux role label that applies to the container.
string
Type is a SELinux type label that applies to the container.
string
User is a SELinux user label that applies to the container.
object
The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.
string
localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet’s configured seccomp profile location. Must only be set if type is “Localhost”.
string
Required
type indicates which kind of seccomp profile will be applied. Valid options are:
Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
array
A list of groups applied to the first process run in each container, in addition to the container’s primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
array
Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.
object
Sysctl defines a kernel parameter to be set
string
Required
Name of a property to set
string
Required
Value of a property to set
object
The Windows specific settings applied to all containers. If unspecified, the options within a container’s SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
string
GMSACredentialSpecName is the name of the GMSA credential spec to use.
boolean
HostProcess determines if a container should be run as a ‘Host Process’ container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod’s containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
string
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
string
PriorityClassName specifies the priority class name for the Envoy pod(s). If specified, the PriorityClass resource with this PriorityClassName must be created beforehand. If not specified, the Envoy pods' priority defaults to zero.
object
ResourceRequirements describes the compute resource requirements.
string
ServiceAccountName is the name of the service account
object
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
string
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
string
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
string
Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
integer
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
string
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
object
TopologySpreadConstraint specifies how to spread matching pods among the given topology.
object
LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
array
matchExpressions is a list of label selector requirements. The requirements are ANDed.
object
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
string
Required
key is the label key that the selector applies to.
string
Required
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
array
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
object
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
array
MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don’t exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.
integer
Required
MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed.
integer
MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats “global minimum” as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won’t schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so “global minimum” is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew.
This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).
string
NodeAffinityPolicy indicates how we will treat Pod’s nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
string
NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
string
Required
TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each as a “bucket”, and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is “kubernetes.io/hostname”, each Node is a domain of that topology. And, if TopologyKey is “topology.kubernetes.io/zone”, each zone is a domain of that topology. It’s a required field.
string
Required
WhenUnsatisfiable indicates how to deal with a pod if it doesn’t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered “Unsatisfiable” for an incoming pod if and only if every possible node assignment for that pod would violate “MaxSkew” on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won’t make it more imbalanced. It’s a required field.
array
Envs defines environment variables for Kafka broker Pods. Adding the “+” prefix to the name prepends the value to that environment variable instead of overwriting it. Add the “+” suffix to append.
object
EnvVar represents an environment variable present in a Container.
string
Required
Name of the environment variable. Must be a C_IDENTIFIER.
string
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.
object
Source for the environment variable’s value. Cannot be used if value is not empty.
object
Selects a key of a ConfigMap.
boolean
Specify whether the ConfigMap or its key must be defined
object
Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
Selects a key of a secret in the pod’s namespace
string
Required
The key of the secret to select from. Must be a valid secret key.
boolean
Specify whether the Secret or its key must be defined
string
IngressController specifies the type of the ingress controller to be used for external listeners. The istioingress ingress controller type requires the spec.istioControlPlane field to be populated as well.
object
IstioControlPlane is a reference to the IstioControlPlane resource for envoy configuration. It must be specified if istio ingress is used.
object
IstioIngressConfig defines the config for the Istio Ingress Controller
object
Annotations defines the annotations placed on the istio ingress controller deployment
array
Envs allows to add additional env vars to the istio meshgateway resource
object
EnvVar represents an environment variable present in a Container.
string
Required
Name of the environment variable. Must be a C_IDENTIFIER.
string
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to “”.
object
Source for the environment variable’s value. Cannot be used if value is not empty.
object
Selects a key of a ConfigMap.
boolean
Specify whether the ConfigMap or its key must be defined
object
Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
string
Version of the schema the FieldPath is written in terms of, defaults to “v1”.
string
Required
Path of the field to select in the specified API version.
object
Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
string
Container name: required for volumes, optional for env vars
Specifies the output format of the exposed resources, defaults to “1”
string
Required
Required: resource to select
object
Selects a key of a secret in the pod’s namespace
string
Required
The key of the secret to select from. Must be a valid secret key.
boolean
Specify whether the Secret or its key must be defined
string
REQUIRED if mode is MUTUAL. The path to a file containing certificate authority certificates to use in verifying a presented client side certificate.
array
Optional: If specified, only support the specified cipher list. Otherwise default to the default cipher list supported by Envoy.
string
The credentialName stands for a unique identifier that can be used to identify the serverCertificate and the privateKey. The credentialName appended with suffix "-cacert" is used to identify the CaCertificates associated with this server. Gateway workloads capable of fetching credentials from a remote credential store such as Kubernetes secrets, will be configured to retrieve the serverCertificate and the privateKey using credentialName, instead of using the file system paths specified above. If using mutual TLS, gateway workload instances will retrieve the CaCertificates using credentialName-cacert. The semantics of the name are platform dependent. In Kubernetes, the default Istio supplied credential server expects the credentialName to match the name of the Kubernetes secret that holds the server certificate, the private key, and the CA certificate (if using mutual TLS). Set the ISTIO_META_USER_SDS metadata variable in the gateway's proxy to enable the dynamic credential fetching feature.
boolean
If set to true, the load balancer will send a 301 redirect for all http connections, asking the clients to use HTTPS.
string
Optional: Maximum TLS protocol version.
string
Optional: Minimum TLS protocol version.
string
Optional: Indicates whether connections to this port should be secured using TLS. The value of this field determines how TLS is enforced.
string
REQUIRED if mode is SIMPLE or MUTUAL. The path to the file holding the server's private key.
string
REQUIRED if mode is SIMPLE or MUTUAL. The path to the file holding the server-side TLS certificate to use.
array
A list of alternate names to verify the subject identity in the certificate presented by the client.
array
An optional list of hex-encoded SHA-256 hashes of the authorized client certificates. Both simple and colon separated formats are acceptable. Note: When both verify_certificate_hash and verify_certificate_spki are specified, a hash matching either value will result in the certificate being accepted.
array
An optional list of base64-encoded SHA-256 hashes of the SPKIs of authorized client certificates. Note: When both verify_certificate_hash and verify_certificate_spki are specified, a hash matching either value will result in the certificate being accepted.
object
ResourceRequirements describes the compute resource requirements.
object
The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.
string
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
string
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
string
Operator represents a key’s relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
integer
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
string
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
object
Required
ListenersConfig defines the Kafka listener types
object
ExternalListenerConfig defines the external listener config for Kafka
string
accessMethod defines the method through which the external listener is exposed. Two types are supported: LoadBalancer and NodePort. LoadBalancer is the recommended default. NodePort should be used in Kubernetes environments with no support for provisioning Load Balancers.
integer
Configuring anyCastPort allows Kafka cluster access without specifying the exact broker
object
Config allows specifying ingress controller configuration per external listener; if set, it overrides the default KafkaClusterSpec.IstioIngressConfig or KafkaClusterSpec.EnvoyConfig for this external listener.
integer
Required
externalStartingPort is added to each broker ID to get the port number that will be used for external access to the broker. The choice of broker ID and externalStartingPort must satisfy 0 < broker ID + externalStartingPort <= 65535. If accessMethod is NodePort and externalStartingPort is set to 0, then the broker IDs are not added and the NodePort port numbers are chosen automatically by the K8s Service controller.
string
externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. “Local” preserves the client source IP and avoids a second hop for LoadBalancer and Nodeport type services, but risks potentially imbalanced traffic spreading. “Cluster” obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.
string
In case of external listeners using LoadBalancer access method the value of this field is used to advertise the Kafka broker external listener instead of the public IP of the provisioned LoadBalancer service (e.g. can be used to advertise the listener using a URL recorded in DNS instead of public IP). In case of external listeners using NodePort access method the broker instead of node public IP (see “brokerConfig.nodePortExternalIP”) is advertised on the address having the following format: -.
object
ServerSSLCertSecret is a reference to the Kubernetes secret that contains the server certificate for the listener to be used for SSL communication. The secret must contain the keystore, truststore jks files and the password for them in base64 encoded format under the keystore.jks, truststore.jks, password data fields. If this field is omitted koperator will auto-create a self-signed server certificate using the configuration provided in ‘sslSecrets’ field.
object
ServiceAnnotations defines annotations which will be placed to the service or services created for the external listener
string
ServiceType string describes ingress methods for a service. Only "NodePort" and "LoadBalancer" are supported. The default value is LoadBalancer.
string
SSLClientAuth specifies whether client authentication is required, requested, or not required. This field defaults to “required” if it is omitted
string
Required
SecurityProtocol is the protocol used to communicate with brokers. Valid values are: plaintext, ssl, sasl_plaintext, sasl_ssl.
object
InternalListenerConfig defines the internal listener config for Kafka
object
ServerSSLCertSecret is a reference to the Kubernetes secret that contains the server certificate for the listener to be used for SSL communication. The secret must contain the keystore, truststore jks files and the password for them in base64 encoded format under the keystore.jks, truststore.jks, password data fields. If this field is omitted koperator will auto-create a self-signed server certificate using the configuration provided in ‘sslSecrets’ field.
string
SSLClientAuth specifies whether client authentication is required, requested, or not required. This field defaults to “required” if it is omitted
string
Required
SecurityProtocol is the protocol used to communicate with brokers. Valid values are: plaintext, ssl, sasl_plaintext, sasl_ssl.
object
SSLSecrets defines the Kafka SSL secrets
object
ObjectReference is a reference to an object with a given name, kind and group.
string
Group of the resource being referred to.
string
Kind of the resource being referred to.
string
Required
Name of the resource being referred to.
string
PKIBackend represents an interface implementing the PKIManager
object
MonitoringConfig defines the config for monitoring Kafka and Cruise Control
boolean
Required
If true OneBrokerPerNode ensures that each kafka broker will be placed on a different node unless a custom Affinity definition overrides this behavior
object
RackAwareness defines the required fields to enable kafka’s rack aware feature
boolean
RemoveUnusedIngressResources: when true, the unnecessary resources from the previous ingress state are removed; when false, they are kept so that the Kafka cluster remains available to Kafka clients that still use the previous ingress setting.
object
Required
RollingUpgradeConfig defines the desired config of the RollingUpgrade
integer
Required
FailureThreshold controls how many failures the cluster can tolerate during a rolling upgrade. Once the number of failures reaches this threshold, a rolling upgrade flow stops. The number of failures is computed as the sum of distinct broker replicas with either offline replicas or out-of-sync replicas and the number of alerts triggered by alerts labeled with 'rollingupgrade'.
array
Required
ZKAddresses specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server.
string
ZKPath specifies the ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace.
object
KafkaClusterStatus defines the observed state of KafkaCluster
string
CruiseControlTopicStatus holds info about the CC topic status
object
ListenerStatuses holds information about the statuses of the configured listeners. The internal and external listeners are stored in separate maps, and each listener can be looked up by name.
object
RollingUpgradeStatus defines status of rolling upgrade
integer
Required
ErrorCount keeps track the number of errors reported by alerts labeled with ‘rollingupgrade’. It’s reset once these alerts stop firing.
string
Required
ClusterState holds info about the cluster state
5.1.2 - KafkaTopic CRD schema reference (group kafka.banzaicloud.io)
KafkaTopic is the Schema for the kafkatopics API
KafkaTopic
KafkaTopic is the Schema for the kafkatopics API
- Full name: kafkatopics.kafka.banzaicloud.io
- Group: kafka.banzaicloud.io
- Singular name: kafkatopic
- Plural name: kafkatopics
- Scope: Namespaced
- Versions: v1alpha1
Version v1alpha1
Properties
object
KafkaTopicSpec defines the desired state of KafkaTopic
object
Required
ClusterReference states a reference to a cluster for topic/user provisioning
integer
Required
Partitions defines the desired number of partitions; must be positive, or -1 to signify using the broker’s default
integer
Required
ReplicationFactor defines the desired replication factor; must be positive, or -1 to signify using the broker’s default
object
KafkaTopicStatus defines the observed state of KafkaTopic
string
Required
ManagedBy describes who is the manager of the Kafka topic. When its value is not “koperator” then modifications to the topic configurations of the KafkaTopic CR will not be propagated to the Kafka topic. Manager of the Kafka topic can be changed by adding the “managedBy: ” annotation to the KafkaTopic CR.
string
Required
TopicState defines the state of a KafkaTopic
5.1.3 - KafkaUser CRD schema reference (group kafka.banzaicloud.io)
KafkaUser is the Schema for the kafka users API
KafkaUser
KafkaUser is the Schema for the kafka users API
- Full name: kafkausers.kafka.banzaicloud.io
- Group: kafka.banzaicloud.io
- Singular name: kafkauser
- Plural name: kafkausers
- Scope: Namespaced
- Versions: v1alpha1
Version v1alpha1
Properties
object
KafkaUserSpec defines the desired state of KafkaUser
object
Annotations defines the annotations placed on the certificate or certificate signing request object
object
Required
ClusterReference states a reference to a cluster for topic/user provisioning
object
ObjectReference is a reference to an object with a given name, kind and group.
string
Group of the resource being referred to.
string
Kind of the resource being referred to.
string
Required
Name of the resource being referred to.
string
SignerName indicates requested signer, and is a qualified name.
object
UserTopicGrant is the desired permissions for the KafkaUser
string
Required
KafkaAccessType holds info about the Kafka ACL
string
KafkaPatternType holds the Resource Pattern Type of the Kafka ACL
object
KafkaUserStatus defines the observed state of KafkaUser
string
Required
UserState defines the state of a KafkaUser
5.2 - KafkaCluster CR Examples
The following KafkaCluster custom resource examples show you some basic use cases.
You can use these examples as a base for your own Kafka cluster.
KafkaCluster CR with detailed explanation
This is our most descriptive KafkaCluster CR; it contains detailed explanations of the settings.
Kafka cluster with monitoring
This is a very simple KafkaCluster CR with Prometheus monitoring enabled.
Kafka cluster with ACL, SSL, and rack awareness
You can read more details about rack awareness here.
Kafka cluster with broker configuration
Kafka cluster with custom SSL certificates for external listeners
You can specify custom SSL certificates for listeners.
For details about SSL configuration, see Securing Kafka With SSL.
Kafka cluster with SASL
You can use SASL authentication on the listeners.
For details, see Expose the Kafka cluster to external applications.
Kafka cluster with load balancers and brokers in the same availability zone
You can create a broker-ingress mapping to eliminate traffic across availability zones between load balancers and brokers by configuring load balancers for brokers in the same availability zone.
Kafka cluster with Istio
You can use Istio as the ingress controller for your external listeners. It requires using our Istio operator in the Kubernetes cluster.
Kafka cluster with custom advertised address for external listeners and brokers
You can set a custom advertised IP address for brokers.
This is useful when you’re advertising the brokers on an IP address different from the Kubernetes node IP address.
You can also set custom advertised address for external listeners.
For details, see Expose the Kafka cluster to external applications.
Kafka cluster with Kubernetes scheduler affinity settings
You can set node affinity for your brokers.
Kafka cluster with custom storage class
You can configure your brokers to use custom storage classes.
6 - Provisioning Kafka Topics
Create topic
You can create Kafka topics either:
- directly against the cluster with command line utilities, or
- via the KafkaTopic CRD.
Below is an example KafkaTopic CR you can apply with kubectl.
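A minimal sketch (the topic name, partition and replication counts, and config entries are illustrative; clusterRef must name your KafkaCluster CR, here assumed to be kafka in the kafka namespace):
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaTopic
metadata:
  name: example-topic
  namespace: kafka
spec:
  clusterRef:
    name: kafka
  name: example-topic
  partitions: 3
  replicationFactor: 2
  config:
    # illustrative topic-level configs; any valid Kafka topic config can go here
    "retention.ms": "604800000"
    "cleanup.policy": "delete"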
For a full list of configuration options, see the official Kafka documentation.
Update topic
If you want to update the configuration of the topic after it’s been created, you can either:
- edit the manifest and run
kubectl apply
again, or - run
kubectl edit -n kafka kafkatopic example-topic
and then update the configuration in the editor that gets spawned.
You can increase the partition count for a topic the same way, or by running the following one-liner using patch:
kubectl patch -n kafka kafkatopic example-topic --patch '{"spec": {"partitions": 5}}' --type=merge
kafkatopic.kafka.banzaicloud.io/example-topic patched
Note: Topics created by the Koperator are not enforced in any way. From the Kubernetes perspective, Kafka Topics are external resources.
7 - Kafka clusters with pre-provisioned volumes
This guide describes how to configure KafkaCluster
to deploy Apache Kafka clusters which use pre-provisioned volumes instead of dynamically provisioned ones. Using static volumes is useful in environments where dynamic volume provisioning is not supported. Koperator uses persistent volume claim Kubernetes resources to dynamically provision volumes for the Kafka broker log directories.
Kubernetes provides a feature which allows binding persistent volume claims to existing persistent volumes, either through the volumeName or the selector field. This allows Koperator to use pre-created persistent volumes as Kafka broker log directories instead of dynamically provisioning persistent volumes. For this binding to work:
- the configuration fields specified under storageConfigs.pvcSpec (such as accessModes, storageClassName) must match the specification of the pre-created persistent volume, and
- the resources.requests.storage must fit within the capacity of the persistent volume.
For further details on how the persistent volume claim binding works, consult the Kubernetes documentation.
In the following example, it is assumed that you (or your administrator) have already created four persistent volumes. The example shows you how to create a Kafka cluster with two brokers, each broker configured with two log directories to use the four pre-provisioned volumes:
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
namespace: kafka # namespace of the kafka cluster this volume is for in case there are multiple kafka clusters with the same name in different namespaces
kafka_cr: kafka # name of the kafka cluster this volume is for
brokerId: "0" # the id of the broker this volume is for
mountPath: kafka-logs-1 # path mounted as broker log dir
spec:
...
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: my-storage-class
...
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
namespace: kafka # namespace of the kafka cluster this volume is for in case there are multiple kafka clusters with the same name in different namespaces
kafka_cr: kafka # name of the kafka cluster this volume is for
brokerId: "0" # the id of the broker this volume is for
mountPath: kafka-logs-2 # path mounted as broker log dir
spec:
...
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: my-storage-class
...
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
namespace: kafka # namespace of the kafka cluster this volume is for in case there are multiple kafka clusters with the same name in different namespaces
kafka_cr: kafka # name of the kafka cluster this volume is for
brokerId: "1" # the id of the broker this volume is for
mountPath: kafka-logs-1 # path mounted as broker log dir
spec:
...
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: my-storage-class
...
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
namespace: kafka # namespace of the kafka cluster this volume is for in case there are multiple kafka clusters with the same name in different namespaces
kafka_cr: kafka # name of the kafka cluster this volume is for
brokerId: "1" # the id of the broker this volume is for
mountPath: kafka-logs-2 # path mounted as broker log dir
spec:
...
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: my-storage-class
...
Broker-level storage configuration to use pre-provisioned volumes
The following storageConfigs, specified at the broker level, use the pre-created persistent volumes described above as broker log dirs:
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
namespace: kafka
name: kafka
spec:
...
brokers:
- id: 0
brokerConfigGroup: default
brokerConfig:
storageConfigs:
- mountPath: /kafka-logs-1
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: "0"
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
- mountPath: /kafka-logs-2
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: "0"
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
- id: 1
brokerConfigGroup: default
brokerConfig:
storageConfigs:
- mountPath: /kafka-logs-1
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: "1"
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
- mountPath: /kafka-logs-2
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: "1"
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
Broker configuration group level storage config to use pre-provisioned volumes
The following storageConfigs, specified at the broker configuration group level, use the pre-created persistent volumes described above as broker log dirs:
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
namespace: kafka
name: kafka
spec:
brokerConfigGroups:
default:
storageConfigs:
- mountPath: /kafka-logs-1
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: '{{ .BrokerId }}'
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
- mountPath: /kafka-logs-2
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: '{{ .BrokerId }}'
# strip '/' from mount path as label selector values
# has to start with an alphanumeric character': https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
mountPath: '{{ trimPrefix "/" .MountPath }}'
storageClassName: my-storage-class
- mountPath: /mountpath/that/exceeds63characters/kafka-logs-123456789123456789
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector: # bind to pre-provisioned persistent volumes by labels
matchLabels:
namespace: kafka
kafka_cr: kafka
brokerId: '{{ .BrokerId }}'
# use sha1sum of mountPath to not exceed the 63 char limit for label selector values
mountPath: '{{ .MountPath | sha1sum }}'
storageClassName: my-storage-class
...
Storage config data fields
The following data fields are supported in the storage config:
- .BrokerId - resolves to the current broker's id
- .MountPath - resolves to the value of the mountPath field of the current storage config
Under the hood, go-templates enhanced with Sprig functions are used to resolve these fields, which allows alterations to the resulting value (for examples, see the use of the trimPrefix and sha1sum template functions above).
8 - Securing Kafka With SSL
The Koperator makes securing your Apache Kafka cluster with SSL simple.
Enable SSL encryption in Apache Kafka
To create an Apache Kafka cluster which has listener(s) with SSL encryption enabled, you must enable SSL encryption and configure the secrets in the listenersConfig section of your KafkaCluster Custom Resource. You can either provide your own CA certificate and the corresponding private key, or let the operator create them for you from your cluster configuration. Using sslSecrets, Koperator generates client and server certificates signed by the CA. The server certificate is shared across listeners. The client certificate is used by the Koperator, Cruise Control, and the Cruise Control Metrics Reporter to communicate with Kafka brokers on listeners with SSL enabled.
Providing custom certificates per listener is supported from Koperator version 0.21.0. Configurations where certain external listeners use user-provided certificates while others rely on the auto-generated ones provided by Koperator are also supported. See the details below.
Using auto-generated certificates (sslSecrets)
CAUTION:
After the cluster is created, you cannot change the way the listeners are configured without an outage. If a cluster was created with an unencrypted (plain text) listener and you want to switch to SSL-encrypted listeners (or the other way around), you must manually delete each broker pod. The operator restarts the pods with the new listener configuration.
The following example enables SSL and automatically generates the certificates:
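A minimal sketch of such a configuration (the listener name, port, and secret name are illustrative assumptions):
listenersConfig:
  internalListeners:
    # SSL listener used for inner broker communication
    - type: "ssl"
      name: "internal"
      containerPort: 29092
      usedForInnerBrokerCommunication: true
  sslSecrets:
    # the operator generates the CA into this secret when create is true
    tlsSecretName: "test-kafka-operator"
    create: true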
If sslSecrets.create is false, the operator will look for the secret at sslSecrets.tlsSecretName in the namespace of the KafkaCluster custom resource and expects these values:
Key | Value
----|------
caCert | The CA certificate
caKey | The CA private key
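If you provide your own CA this way, a secret of the expected shape could be created along these lines (file names are illustrative):
kubectl create secret generic test-kafka-operator -n kafka \
  --from-file=caCert=ca.crt \
  --from-file=caKey=ca.key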
Using own certificates
Listeners not used for internal broker and controller communication
In this KafkaCluster custom resource, SSL is enabled for all listeners, and certificates are automatically generated for the "inner" and "controller" listeners. The "external" and "internal" listeners use the user-provided certificates. The serverSSLCertSecret key is a reference to the Kubernetes secret that contains the server certificate for the listener, to be used for SSL communication.
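For example, an external listener referencing a user-provided server certificate might look like this sketch (names and ports are illustrative):
listenersConfig:
  externalListeners:
    - type: "ssl"
      name: "external"
      externalStartingPort: 19090
      containerPort: 9094
      serverSSLCertSecret:
        # illustrative name of the secret holding keystore.jks, truststore.jks, password
        name: my-server-certificate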
In the server secret, the following keys must be set:
Key | Value
----|------
keystore.jks | Certificate and private key in JKS format
truststore.jks | Trusted CA certificate in JKS format
password | Password for the key and trust store
The certificates in the listener configuration must be in JKS format.
Listeners used for internal broker or controller communication
In this KafkaCluster custom resource, SSL is enabled for all listeners with user-provided server certificates. When a custom certificate is used for a listener that handles internal broker or controller communication, you must also specify the client certificate. The client certificate is used by Koperator, Cruise Control, and the Cruise Control Metrics Reporter to communicate over SSL. The clientSSLCertSecret key is a reference to the Kubernetes secret where the custom client SSL certificate can be provided. The client certificate must be signed by the same CA authority as the server certificate for the corresponding listener. The clientSSLCertSecret must be referenced in the spec field of the KafkaCluster custom resource.
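A sketch of the spec-level reference (the secret name is an illustrative assumption):
spec:
  clientSSLCertSecret:
    # secret holding the client keystore.jks, truststore.jks and password
    name: my-client-certificate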
The client secret must contain the keystore and truststore JKS files and the password for them in base64 encoded format.
In the server secret the following keys must be set:
Key | Value
----|------
keystore.jks | Certificate and private key in JKS format
truststore.jks | Trusted CA certificate in JKS format
password | Password for the key and trust store
In the client secret the following keys must be set:
Key | Value
----|------
keystore.jks | Certificate and private key in JKS format
truststore.jks | Trusted CA certificate in JKS format
password | Password for the key and trust store
Generate JKS certificate
Certificates in JKS format can be generated using OpenSSL and keystore applications. You can also use this script. The keystore.jks file must contain only one PrivateKeyEntry.
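For example, assuming you already have a PEM certificate (tls.crt), its private key (tls.key), and the CA certificate (ca.crt), the JKS keystore and truststore could be produced along these lines (file names and the password are illustrative):
# bundle the certificate and key into a PKCS12 keystore, then convert it to JKS
openssl pkcs12 -export -in tls.crt -inkey tls.key -certfile ca.crt \
  -name kafka -out keystore.p12 -password pass:changeit
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
  -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit
# import the CA certificate into a JKS truststore
keytool -import -trustcacerts -alias ca -file ca.crt \
  -keystore truststore.jks -storepass changeit -noprompt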
Kafka listeners use 2-way SSL mutual authentication, so you must properly set the CNAME (Common Name) fields and, if needed, the SAN (Subject Alternative Name) fields in the certificates. In the following description we assume that the Kafka cluster is in the kafka namespace.
- For the client certificate, CNAME must be "kafka-controller.kafka.mgt.cluster.local" (where .kafka. is the namespace of the Kafka cluster).
- For internal listeners which are exposed by a headless service (kafka-headless), CNAME must be "kafka-headless.kafka.svc.cluster.local", and the SAN field must contain the following:
  - *.kafka-headless.kafka.svc.cluster.local
  - kafka-headless.kafka.svc.cluster.local
  - *.kafka-headless.kafka.svc
  - kafka-headless.kafka.svc
  - *.kafka-headless.kafka
  - kafka-headless.kafka
  - kafka-headless
- For internal listeners which are exposed by a normal service (kafka-all-broker), CNAME must be "kafka-all-broker.kafka.svc.cluster.local".
- For external listeners, you need to use the advertised load balancer hostname as CNAME. The hostname needs to be specified in the KafkaCluster custom resource with hostnameOverride, and the accessMethod has to be "LoadBalancer". For details about this override, see Step 5 in Expose cluster using a LoadBalancer.
Using Kafka ACLs with SSL
If you choose not to enable ACLs for your Apache Kafka cluster, you may still use the KafkaUser resource to create new certificates for your applications. You can leave the topicGrants out, as they will not have any effect.
- To enable ACL support for your Apache Kafka cluster, pass the following configurations along with your brokerConfig:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
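One way to supply these properties is through the KafkaCluster CR's readOnlyConfig field, sketched here (the placement is an assumption; adjust to wherever you keep your broker configuration):
spec:
  readOnlyConfig: |
    authorizer.class.name=kafka.security.authorizer.AclAuthorizer
    allow.everyone.if.no.acl.found=false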
- The operator will ensure that Cruise Control and itself can still access the cluster; however, to create new clients you will need to generate new certificates signed by the CA, and ensure ACLs on the topic. The operator can automate this process for you using the KafkaUser CRD.
For example, to create a new producer for the topic test-topic against the KafkaCluster kafka, apply the following configuration:
cat << EOF | kubectl apply -n kafka -f -
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaUser
metadata:
name: example-producer
namespace: kafka
spec:
clusterRef:
name: kafka
secretName: example-producer-secret
topicGrants:
- topicName: test-topic
accessType: write
EOF
This will create a user and store its credentials in the secret example-producer-secret. The secret contains these fields:
Key | Value
----|------
ca.crt | The CA certificate
tls.crt | The user certificate
tls.key | The user private key
- You can then mount these secrets to your pod. Alternatively, you can write them to your local machine by running:
kubectl get secret example-producer-secret -o jsonpath="{['data']['ca\.crt']}" | base64 -d > ca.crt
kubectl get secret example-producer-secret -o jsonpath="{['data']['tls\.crt']}" | base64 -d > tls.crt
kubectl get secret example-producer-secret -o jsonpath="{['data']['tls\.key']}" | base64 -d > tls.key
- To create a consumer for the topic, run this command:
cat << EOF | kubectl apply -n kafka -f -
apiVersion: kafka.banzaicloud.io/v1alpha1
kind: KafkaUser
metadata:
name: example-consumer
namespace: kafka
spec:
clusterRef:
name: kafka
secretName: example-consumer-secret
includeJKS: true
topicGrants:
- topicName: test-topic
accessType: read
EOF
- The operator can also include a Java keystore format (JKS) with your user secret if you'd like. Add includeJKS: true to the spec as shown above, and then the user secret will gain these additional fields:
Key | Value
----|------
tls.jks | The java keystore containing both the user keys and the CA (use this for your keystore AND truststore)
pass.txt | The password to decrypt the JKS (this will be randomly generated)
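For reference, a Kafka client could then point both its keystore and truststore at the mounted tls.jks (paths are illustrative; the password is the content of pass.txt):
security.protocol=SSL
ssl.keystore.location=/path/to/tls.jks
ssl.keystore.password=<contents of pass.txt>
ssl.truststore.location=/path/to/tls.jks
ssl.truststore.password=<contents of pass.txt>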
9 - Koperator capabilities
As highlighted in the features section, Koperator removed the reliance on StatefulSet, and supports several different use cases.
Note: This is not a complete list; if you have a specific requirement or question, see our support options.
Vertical capacity scaling
You may have encountered situations where the horizontal scaling of a cluster is impossible. When only one Broker is throttling and needs more CPU or additional disks (because it handles the most partitions), a StatefulSet-based solution is useless, since it does not distinguish between the specifications of replicas. Handling such a case requires unique Broker configurations. If there is a need to add a new disk to a specific Broker, a StatefulSet-based solution wastes disk space (and money), since it cannot add a disk to just that Broker: the StatefulSet adds one to every replica.
With the Koperator, adding a new disk to any Broker is as easy as changing a CR configuration. Similarly, any Broker-specific configuration can be done on a Broker by Broker basis.
An unhandled error with Broker #1 in a three Broker cluster
In the event of an error with Broker #1, it is ideal to handle it without disrupting the other Brokers. To handle the error, you would like to temporarily remove this Broker from the cluster and fix its state, reconciling the node that hosts it, or maybe reconfigure the Broker using a new configuration. Again, when using StatefulSet, you lose the ability to remove specific Brokers from the cluster. StatefulSet only supports a replicas field that determines how many replicas an application should use. On a downscale/removal, this number can be lowered; however, this means that Kubernetes will remove the most recently added Pod (Broker #3) from the cluster - which, in this case, is not the Broker you want to remove.
To remove Broker #1 from the cluster with a StatefulSet, you would need to lower the number of brokers in the cluster from three to one. This would leave only one live Broker, and would kill Brokers that handle traffic. Koperator supports removing specific brokers without disrupting traffic in the cluster.
Fine grained Broker config support
Apache Kafka is a stateful application, where Brokers create/form a cluster with other Brokers. Every Broker is uniquely configurable (Koperator supports heterogeneous environments, in which no nodes are the same, act the same, or have the same specifications - from the infrastructure up through the Brokers' Envoy configuration). Kafka has many Broker configs, which can be used to fine-tune specific brokers, and Koperator did not want to limit these to ALL Brokers in a StatefulSet. Koperator supports unique Broker configs.
In each of the three scenarios listed above, Koperator does not use StatefulSet, relying, instead, on Pods, PVCs and ConfigMaps. While using StatefulSet is a very convenient starting point, as it handles roughly 80% of scenarios, it also introduces huge limitations when running Kafka on Kubernetes in production.
Monitoring based control
Use of monitoring is essential for any application, and all relevant information about Kafka should be published to a monitoring solution. When using Kubernetes, the de facto solution is Prometheus, which supports configuring alerts based on previously consumed metrics. Koperator was built as a standards-based solution (Prometheus and Alert Manager) that could handle and react to alerts automatically, so human operators wouldn’t have to. Koperator supports alert-based Kafka cluster management.
LinkedIn’s Cruise Control
LinkedIn knows how to operate Kafka well. They built a tool, called Cruise Control, to operate their Kafka infrastructure. Koperator is built to handle the infrastructure, not to reinvent the wheel in terms of operating Kafka. Koperator was built to leverage the Kubernetes operator pattern and our Kubernetes expertise by handling all Kafka infrastructure related issues in the best possible way. Managing Kafka is a separate concern, for which unique, industry-standard tools and solutions already exist, so LinkedIn's Cruise Control is integrated with the Koperator.
10 - Monitoring Apache Kafka on Kubernetes
This documentation shows you how to enable custom monitoring on an Apache Kafka cluster installed using the Koperator.
Using Helm for Prometheus
By default, the Koperator does not set annotations on the broker pods. To set annotations on the broker pods, specify them in the KafkaCluster CR. Also, you must open port 9020 on the brokers and on Cruise Control to enable scraping. For example:
brokerConfigGroups:
default:
brokerAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9020"
...
cruiseControlConfig:
cruiseControlAnnotations:
prometheus.io/port: "9020"
prometheus.io/scrape: "true"
Prometheus must be configured to recognize these annotations. The following example contains the required config.
# Example scrape config for pods
#
# The relabeling allows the actual pod scrape endpoint to be configured via the
# following annotations:
#
# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
If you are using the provided CR, the operator installs the official JMX exporter for Prometheus.
To change this behavior, modify the following lines at the end of the CR.
monitoringConfig:
  # jmxImage describes the used Prometheus JMX exporter agent container
  jmxImage: "banzaicloud/jmx-javaagent:0.15.0"
  # pathToJar describes the path to the jar file in the given image
  pathToJar: "/opt/jmx_exporter/jmx_prometheus_javaagent-0.15.0.jar"
  # kafkaJMXExporterConfig describes the JMX exporter config for Kafka
  kafkaJMXExporterConfig: |
    lowercaseOutputName: true
Using ServiceMonitors
To use ServiceMonitors, we recommend running Kafka with a unique service per broker instead of a headless service.
Configure the CR the following way:
# Specify if the cluster should use headlessService for Kafka or individual services
# using service/broker may come in handy in case of service mesh
headlessServiceEnabled: false
Disabling the headless service means the operator sets up Kafka with a unique service per broker.
Once you have a cluster up and running, create one ServiceMonitor per broker, like the following example for broker 0 (a loop covering all brokers is sketched after it):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: kafka-0
spec:
selector:
matchLabels:
app: kafka
brokerId: "0"
kafka_cr: kafka
endpoints:
- port: metrics
interval: 10s
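For example, a short shell loop - assuming a three-broker cluster named kafka in the kafka namespace, with the labels shown above - can stamp out one ServiceMonitor per broker:
for id in 0 1 2; do
kubectl apply -n kafka -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kafka-${id}
spec:
  selector:
    matchLabels:
      app: kafka
      brokerId: "${id}"
      kafka_cr: kafka
  endpoints:
    - port: metrics
      interval: 10s
EOF
done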
11 - Expose the Kafka cluster to external applications
There are two methods to expose your Apache Kafka cluster so that external client applications that run outside the Kubernetes cluster can access it:
The LoadBalancer method is a convenient way to publish your Kafka cluster, as you don't have to set up a load balancer, provision public IPs, configure routing rules, and so on, since all of this is taken care of for you. This method also has the advantage of reducing your attack surface: you don't have to make the Kubernetes cluster's nodes directly reachable from outside, because incoming external traffic is routed to them through the Load Balancer.
The NodePort method provides access to Kafka for external clients through the external public IP of the Kubernetes cluster's nodes.
The NodePort method is a good fit when:
- your Kubernetes distribution or hosting environment does not support Load Balancers,
- business requirements make the extra hops introduced by the Load Balancer and Ingress controller unacceptable, or
- the environment where the Kubernetes cluster is hosted is locked down, and thus the Kubernetes nodes are not reachable through their public IPs from outside.
External listeners
You can expose the Kafka cluster outside the Kubernetes cluster by declaring one or more externalListeners in the KafkaCluster custom resource. The following externalListeners configuration snippet creates two external access points through which the Kafka cluster's brokers can be reached. These external listeners are registered in the advertised.listeners Kafka broker configuration as EXTERNAL1://... and EXTERNAL2://... entries.
By default, external listeners use the LoadBalancer access method.
listenersConfig:
externalListeners:
- type: "plaintext"
name: "external1"
externalStartingPort: 19090
containerPort: 9094
# anyCastPort sets which port clients can use to reach all the brokers of the Kafka cluster, default is 29092
# valid range: 0 < x < 65536
# this doesn't have impact if using NodePort to expose the Kafka cluster
anyCastPort: 443
# ingressControllerTargetPort sets which port the ingress controller uses to handle the external client traffic through the "anyCastPort", default is 29092
# valid range: 1023 < x < 65536
# this doesn't have impact if using NodePort to expose the Kafka cluster
# if specified, the ingressControllerTargetPort cannot collide with the reserved envoy ports (if using envoy) and the external broker port numbers
ingressControllerTargetPort: 3000
- type: "plaintext"
name: "external2"
externalStartingPort: 19090
containerPort: 9095
Expose cluster using a LoadBalancer
To configure an external listener that uses the LoadBalancer access method, complete the following steps.
- Edit the KafkaCluster custom resource.
- Add an externalListeners section under listenersConfig. The following example creates a Load Balancer for the external listener, external1. Each broker in the cluster receives a dedicated port number on the Load Balancer, computed as broker port number = externalStartingPort + broker id. This is registered in each broker's configuration as advertised.listeners=EXTERNAL1://<loadbalancer-public-ip>:<broker port number>.
There are currently two reserved container ports while using Envoy as the ingress controller: 8081 for the health-check port, and 8080 for the admin port. The external broker port numbers (externalStartingPort + broker id) cannot collide with the reserved Envoy ports.
listenersConfig:
externalListeners:
- type: "plaintext"
name: "external1"
externalStartingPort: 19090
containerPort: 9094
accessMethod: LoadBalancer
# anyCastPort sets which port clients can use to reach all the brokers of the Kafka cluster, default is 29092
# valid range: 0 < x < 65536
anyCastPort: 443
# ingressControllerTargetPort sets which port the ingress controller uses to handle the external client traffic through the "anyCastPort", default is 29092
# valid range: 1023 < x < 65536
# if specified, the ingressControllerTargetPort cannot collide with the reserved envoy ports (if using envoy) and the external broker port numbers
ingressControllerTargetPort: 3000
- Set the ingress controller. The ingress controllers that are currently supported for load balancing are:
  - envoy: uses Envoy proxy as an ingress.
  - istioingress: uses Istio proxy gateway as an ingress.
  Configure the ingress controller you want to use:
  - To use Envoy, set the ingressController field in the KafkaCluster custom resource to envoy, as in the following examples.
For OpenShift:
spec:
# ...
envoyConfig:
podSecurityContext:
runAsGroup: 19090
runAsUser: 19090
# ...
ingressController: "envoy"
# ...
For Kubernetes:
spec:
ingressController: "envoy"
  - To use the Istio ingress controller, set the ingressController field to istioingress. Istio operator v2 is supported from Koperator version 0.21.0. Istio operator v2 supports multiple Istio control planes on the same cluster, which is why the control plane corresponding to the gateway must be specified: the istioControlPlane field in the KafkaCluster custom resource is a reference to that IstioControlPlane resource. For example:
spec:
ingressController: "istioingress"
istioControlPlane:
name: <name of the IstioControlPlane custom resource>
namespace: <namespace of the IstioControlPlane custom resource>
- Configure additional parameters for the ingress controller as needed for your environment, for example, the number of replicas, resource requests, and resource limits. You can configure such parameters using the envoyConfig and istioIngressConfig fields, respectively.
- (Optional) For external access through a static URL instead of the load balancer's public IP, specify the URL in the hostnameOverride field of the external listener; it must resolve to the public IP of the load balancer. The broker address will be advertised as advertised.listeners=EXTERNAL1://kafka-1.dev.my.domain:<broker port number>.
listenersConfig:
externalListeners:
- type: "plaintext"
name: "external1"
externalStartingPort: 19090
containerPort: 9094
accessMethod: LoadBalancer
hostnameOverride: kafka-1.dev.my.domain
- Apply the KafkaCluster custom resource to the cluster.
Expose cluster using a NodePort
Using the NodePort access method, external listeners make Kafka brokers accessible through either the external IP of a Kubernetes cluster’s node, or on an external IP that routes into the cluster.
To configure an external listener that uses the NodePort access method, complete the following steps.
- Edit the KafkaCluster custom resource.
- Add an externalListeners section under listenersConfig. The following example creates a NodePort-type service separately for each broker. Brokers can be reached from outside the Kubernetes cluster at <any node public ip>:<broker port number>, where <broker port number> is computed as externalStartingPort + broker id. The externalStartingPort must fall into the range allocated for node ports on the Kubernetes cluster, which is specified via --service-node-port-range (see the Kubernetes documentation).
listenersConfig:
externalListeners:
- type: "plaintext"
name: "external1"
externalStartingPort: 32000
containerPort: 9094
accessMethod: NodePort
- (Optional) For external access through a dynamic URL, specify a suffix in the hostnameOverride field of the external listener:
listenersConfig:
externalListeners:
- type: "plaintext"
name: "external1"
externalStartingPort: 32000
containerPort: 9094
accessMethod: NodePort
    hostnameOverride: .dev.my.domain
The hostnameOverride behaves differently here than with the LoadBalancer access method. In this case, each broker is advertised as advertised.listeners=EXTERNAL1://<kafka-cluster-name>-<broker-id>.<external listener name>.<namespace><value-specified-in-hostnameOverride-field>:<broker port number>. If a three-broker Kafka cluster named kafka is running in the kafka namespace, the advertised.listeners for the brokers will look like this (a client connection example follows these steps):
- broker 0:
- advertised.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:32000
- broker 1:
- advertised.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:32001
- broker 2:
- advertised.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:32002
- Apply the KafkaCluster custom resource to the cluster.
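For instance - reusing the hypothetical dev.my.domain names from the example above - an external client could then bootstrap against any advertised broker address:
kafka-console-producer.sh --bootstrap-server kafka-0.external1.kafka.dev.my.domain:32000 --topic <your-topic-name>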
NodePort external IP
The node IP of the node where the broker pod is scheduled is used in the advertised.listeners broker configuration when nodePortNodeAddressType is specified. Its value determines which IP or domain name of the Kubernetes node is used; the possible values are Hostname, ExternalIP, InternalIP, InternalDNS, and ExternalDNS. The hostnameOverride and nodePortExternalIP fields must not be specified in this case.
brokers:
- id: 0
brokerConfig:
nodePortNodeAddressType: ExternalIP
- id: 1
brokerConfig:
nodePortNodeAddressType: ExternalIP
- id: 2
brokerConfig:
nodePortNodeAddressType: ExternalIP
If the hostnameOverride and nodePortExternalIP fields are not set, then the broker addresses are advertised as follows:
- broker 0:
- advertised.listeners=EXTERNAL1://16.171.47.211:9094
- broker 1:
- advertised.listeners=EXTERNAL1://16.16.66.201:9094
- broker 2:
- advertised.listeners=EXTERNAL1://16.170.214.51:9094
Kafka brokers can be made accessible on external IPs that are not node IPs, but that route into the Kubernetes cluster. These external IPs can be set for each broker in the KafkaCluster custom resource, as in the following example:
brokers:
- id: 0
brokerConfig:
nodePortExternalIP:
external1: 13.53.214.23 # if "hostnameOverride" is not set for "external1" external listener, then broker is advertised on this IP
- id: 1
brokerConfig:
nodePortExternalIP:
external1: 13.48.71.170 # if "hostnameOverride" is not set for "external1" external listener, then broker is advertised on this IP
- id: 2
brokerConfig:
nodePortExternalIP:
external1: 13.49.70.146 # if "hostnameOverride" is not set for "external1" external listener, then broker is advertised on this IP
If the hostnameOverride field is not set, then the broker addresses are advertised as follows:
- broker 0:
- advertised.listeners=EXTERNAL1://13.53.214.23:9094
- broker 1:
- advertised.listeners=EXTERNAL1://13.48.71.170:9094
- broker 2:
- advertised.listeners=EXTERNAL1://13.49.70.146:9094
If both hostnameOverride and nodePortExternalIP fields are set:
- broker 0:
- advertised.listeners=EXTERNAL1://kafka-0.external1.kafka.dev.my.domain:9094
- broker 1:
- advertised.listeners=EXTERNAL1://kafka-1.external1.kafka.dev.my.domain:9094
- broker 2:
- advertised.listeners=EXTERNAL1://kafka-2.external1.kafka.dev.my.domain:9094
Note: If nodePortExternalIP or nodePortNodeAddressType is set, then the containerPort from the external listener config is used as a broker port, and is the same for each broker.
SASL authentication on external listeners
To enable sasl_plaintext authentication on the external listener, modify the externalListeners section of the KafkaCluster CR according to the following example. This will enable an external listener on port 19090.
listenersConfig:
externalListeners:
- config:
defaultIngressConfig: ingress-sasl
ingressConfig:
ingress-sasl:
istioIngressConfig:
gatewayConfig:
credentialName: istio://sds
mode: SIMPLE
containerPort: 9094
externalStartingPort: 19090
name: external
type: sasl_plaintext
To connect to this listener using the Kafka 3.1.0 (and above) console producer, complete the following steps:
- Set the producer properties like this. Replace the parameters between brackets as needed for your environment:
sasl.mechanism=OAUTHBEARER
security.protocol=SASL_SSL
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url=<https://myidp.example.com/oauth2/default/v1/token>
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
clientId="<oauth-client-id>" \
clientSecret="<client-secret>" \
scope="kafka:write";
ssl.truststore.location=/ssl/truststore.jks
ssl.truststore.password=truststorepass
ssl.endpoint.identification.algorithm=
- Run the following command:
kafka-console-producer.sh --bootstrap-server <your-loadbalancer-ip>:19090 --topic <your-topic-name> --producer.config producer.properties
To consume messages from this listener using the Kafka 3.1.0 (and above) console consumer, complete the following steps:
- Set the consumer properties like this. Replace the parameters between brackets as needed for your environment:
group.id=consumer-1
group.instance.id=consumer-1-instance-1
client.id=consumer-1-instance-1
sasl.mechanism=OAUTHBEARER
security.protocol=SASL_SSL
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url=<https://myidp.example.com/oauth2/default/v1/token>
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
clientId="<oauth-client-id>" \
clientSecret="<client-secret>" \
scope="kafka:read" ;
ssl.endpoint.identification.algorithm=
ssl.truststore.location=/ssl/truststore.jks
ssl.truststore.password=truststorepass
- Run the following command:
kafka-console-consumer.sh --bootstrap-server <your-loadbalancer-ip>:19090 --topic <your-topic-name> --consumer.config /opt/kafka/config/consumer.properties --from-beginning
12 - Configure rack awareness
Kafka automatically replicates partitions across brokers, so if a broker fails, the data is safely preserved on another. Kafka’s rack awareness feature spreads replicas of the same partition across different failure groups (racks or availability zones). This extends the guarantees Kafka provides for broker-failure to cover rack and availability zone (AZ) failures, limiting the risk of data loss should all the brokers in the same rack or AZ fail at once.
Note: All brokers deployed by Koperator must belong to the same Kubernetes cluster.
Since rack awareness is so vitally important, especially in multi-region and hybrid-cloud environments, the Koperator provides an automated solution for it, and allows fine-grained broker rack configuration based on pod affinities and anti-affinities.
When well-known Kubernetes labels are available (for example, AZ, node labels, and so on), the Koperator attempts to improve broker resilience by default.
When the broker.rack configuration option is enabled on the Kafka brokers, Kafka spreads replicas of a partition over different racks. This prevents the loss of data even when an entire rack goes offline at once. According to the official Kafka documentation, "it uses an algorithm which ensures that the number of leaders per broker will be constant, regardless of how brokers are distributed across racks. This ensures balanced throughput."
Note: The broker.rack configuration is a read-only config; changing it requires a broker restart.
Enable rack awareness
Enabling rack awareness on a production cluster in a cloud environment is essential, since regions, nodes, and network partitions may vary.
To configure rack awareness, add the following to the spec section of your KafkaCluster CRD.
rackAwareness:
labels:
- "topology.kubernetes.io/region"
- "topology.kubernetes.io/zone"
oneBrokerPerNode: false
- If oneBrokerPerNode is set to true, each broker starts on a new node (that is, literally, one broker per node). If there are not enough nodes for each broker, the broker pod remains in a pending state.
- If oneBrokerPerNode is set to false, the operator tries to schedule the brokers to unique nodes, but if the number of nodes is less than the number of brokers, brokers are scheduled to nodes on which a broker is already running.
Most cloud provider-managed Kubernetes clusters have well-known labels. One well-known label is topology.kubernetes.io/zone: Kubernetes adds this label to the nodes of the cluster and populates it with zone information from the cloud provider. (If the node is in an on-prem cluster, the cluster administrator can also set this label, but it's not strictly mandatory.)
On clusters which do not have well-known labels, you can set your own labels in the CR to achieve rack awareness.
Note that depending on your use case, you might need additional configuration on your Kafka brokers and clients. For example, to use follower-fetching, you must also set replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector in your KafkaCluster CRD, and set the client.rack option in your client configuration to match the region of your brokers.
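A compact sketch of the two sides of follower-fetching (the zone value is an example; it must match the rack/zone the broker reports):
# KafkaCluster CR, broker side (spec.readOnlyConfig):
readOnlyConfig: |
  replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
# client side, e.g. consumer.properties:
# client.rack=eu-north-1a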
Under the hood
As mentioned earlier, broker.rack is a read-only broker config, so it is set whenever the broker starts or restarts. The Koperator holds each broker's configuration in its own ConfigMap. Getting label values from nodes and using them to generate a ConfigMap is relatively easy, but to determine where the exact broker/pod is scheduled, the operator has to wait until the pod is actually scheduled to a node. Luckily, Kubernetes schedules pods even when a given ConfigMap is unavailable; however, the corresponding pod remains in a pending state as long as the ConfigMap is not available to mount. The operator makes use of this pending state to gather all the necessary node labels and initialize a ConfigMap with the fetched data. To take advantage of this, we introduced a status field called RackAwarenessState in our CRD. The operator populates this status field with two values, WaitingForRackAwareness and Configured.
When a broker fails
What happens if a broker fails? Will Kubernetes schedule it to a different zone? When a pod fails, the operator fetches all the available information from the node(s) - including zone and region - and tries to place it back into the zone it was previously in. If it can't, the pod remains pending.
To manually override this and schedule the broker into a different zone or region, set the broker.rack config to the location of the broker node, as sketched below.
13 - Supported versions and compatibility matrix
This page shows you the list of supported Koperator versions, and the versions of other components they are compatible with.
Compatibility matrix
Operator Version | Apache Kafka Version | JMX Exporter Version | Cruise Control Version | Istio Operator Version | Example cluster CR | Maintained |
--- | --- | --- | --- | --- | --- | --- |
v0.18.3 | 2.6.2+ | 0.15.0 | 2.5.37 | 1.10 | link | - |
v0.19.0 | 2.6.2+ | 0.15.0 | 2.5.68 | 1.10 | link | - |
v0.20.0 | 2.6.2+ | 0.15.0 | 2.5.68 | 1.10 | link | - |
v0.20.2 | 2.6.2+ | 0.16.1 | 2.5.80 | 1.10 | link | - |
v0.21.0 | 2.6.2+ | 0.16.1 | 2.5.86 | 2.11 | link | - |
v0.21.1 | 2.6.2+ | 0.16.1 | 2.5.86 | 2.11 | link | - |
v0.21.2 | 2.6.2+ | 0.16.1 | 2.5.86 | 2.11 | link | - |
v0.22.0 | 2.6.2+ | 0.16.1 | 2.5.101 | 2.15.3 | link | + |
v0.23.0 | 2.6.2+ | 0.16.1 | 2.5.101 | 2.15.3 | link | + |
v0.24.0 | 2.6.2+ | 0.16.1 | 2.5.101 | 2.15.3 | link | + |
Available Koperator images
Image | Go version |
--- | --- |
ghcr.io/banzaicloud/kafka-operator:v0.17.0 | 1.16 |
ghcr.io/banzaicloud/kafka-operator:v0.18.3 | 1.16 |
ghcr.io/banzaicloud/kafka-operator:v0.19.0 | 1.16 |
ghcr.io/banzaicloud/kafka-operator:v0.20.2 | 1.17 |
ghcr.io/banzaicloud/kafka-operator:v0.21.0 | 1.17 |
ghcr.io/banzaicloud/kafka-operator:v0.21.1 | 1.17 |
ghcr.io/banzaicloud/kafka-operator:v0.21.2 | 1.17 |
ghcr.io/banzaicloud/kafka-operator:v0.22.0 | 1.19 |
ghcr.io/banzaicloud/kafka-operator:v0.23.0 | 1.19 |
ghcr.io/banzaicloud/kafka-operator:v0.23.1 | 1.19 |
ghcr.io/banzaicloud/kafka-operator:v0.24.0 | 1.19 |
ghcr.io/banzaicloud/kafka-operator:v0.24.1 | 1.19 |
Available Apache Kafka images
Image | Java version |
--- | --- |
ghcr.io/banzaicloud/kafka:2.13-2.6.2-bzc.1 | 11 |
ghcr.io/banzaicloud/kafka:2.13-2.7.0-bzc.1 | 11 |
ghcr.io/banzaicloud/kafka:2.13-2.7.0-bzc.2 | 11 |
ghcr.io/banzaicloud/kafka:2.13-2.8.0 | 11 |
ghcr.io/banzaicloud/kafka:2.13-2.8.1 | 11 |
ghcr.io/banzaicloud/kafka:2.13-3.1.0 | 17 |
Available JMX Exporter images
Image | Java version |
--- | --- |
ghcr.io/banzaicloud/jmx-javaagent:0.14.0 | 11 |
ghcr.io/banzaicloud/jmx-javaagent:0.15.0 | 11 |
ghcr.io/banzaicloud/jmx-javaagent:0.16.1 | 11 |
Available Cruise Control images
Image | Java version |
--- | --- |
ghcr.io/banzaicloud/cruise-control:2.5.23 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.28 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.34 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.37 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.43 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.53 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.68 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.80 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.86 | 11 |
ghcr.io/banzaicloud/cruise-control:2.5.101 | 11 |
14 - Support
The Koperator helps you create production-ready Apache Kafka clusters on Kubernetes, with scaling, rebalancing, and alert-based self-healing.
If you encounter problems while using Koperator that the documentation does not address, open an issue or talk to us in our Slack channel #kafka-operator.
15 - Benchmarking Kafka
How to set up the environment for the Kafka Performance Test.
GKE
- Create a test cluster with 3 nodes for ZooKeeper, 3 for Kafka, 1 master node, and 2 nodes for clients.
  Once your cluster is up and running, you can set up the Kubernetes infrastructure.
- Create a StorageClass which enables high-performance disk requests.
kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
EOF
EKS
- Create a test cluster with 3 nodes for ZooKeeper, 3 for Kafka, 1 master node, and 2 nodes for clients.
  Once your cluster is up and running, you can set up the Kubernetes infrastructure.
- Create a StorageClass which enables high-performance disk requests.
kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1
iopsPerGB: "50"
fsType: ext4
volumeBindingMode: WaitForFirstConsumer
EOF
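With either StorageClass in place, brokers can request it through the pvcSpec of their storage configuration. A fragment of the KafkaCluster CR could look like this (the size is illustrative):
brokerConfigGroups:
  default:
    storageConfigs:
      - mountPath: /kafka-logs
        pvcSpec:
          accessModes:
            - ReadWriteOnce
          storageClassName: fast-ssd
          resources:
            requests:
              storage: 10Gi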
Install other required components
- Create a ZooKeeper cluster with 3 replicas using Pravega's Zookeeper Operator.
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/
# the "pravega" repo provides the zookeeper-operator chart
helm repo add pravega https://charts.pravega.io
helm install zookeeper-operator --namespace=zookeeper --create-namespace pravega/zookeeper-operator
kubectl create -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
name: zookeeper-server
namespace: zookeeper
spec:
replicas: 3
EOF
- Install the Koperator CustomResourceDefinition resources (adjust the version number to the Koperator release you want to install) and the corresponding version of Koperator, the operator for managing Apache Kafka on Kubernetes.
kubectl create --validate=false -f https://github.com/banzaicloud/koperator/releases/download/v0.25.1/kafka-operator.crds.yaml
helm install kafka-operator --namespace=kafka --create-namespace banzaicloud-stable/kafka-operator
- Create a 3-broker Kafka cluster using this YAML file. This installs 3 brokers with fast SSD storage. If you would like the brokers in different zones, modify the following configuration to match your environment and use it in the broker configurations:
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
...
spec:
...
brokerConfigGroups:
default:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: <node-label-key>
operator: In
values:
- <node-label-value-zone-1>
- <node-label-value-zone-2>
- <node-label-value-zone-3>
...
- Create a client container inside the cluster:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: kafka-test
spec:
containers:
- name: kafka-test
image: "wurstmeister/kafka:2.12-2.1.1"
# Just spin & wait forever
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 3000; done;" ]
EOF
- Exec into this client and create the perftest, perftest2, and perftest3 topics.
  For internal listeners exposed by a headless service (KafkaCluster.spec.headlessServiceEnabled is set to true):
kubectl exec -it kafka-test -n kafka -- bash
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-headless.kafka:29092 --topic perftest --create --replication-factor 3 --partitions 3
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-headless.kafka:29092 --topic perftest2 --create --replication-factor 3 --partitions 3
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-headless.kafka:29092 --topic perftest3 --create --replication-factor 3 --partitions 3
For internal listeners exposed by a regular service (KafkaCluster.spec.headlessServiceEnabled set to false):
kubectl exec -it kafka-test -n kafka -- bash
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-all-broker.kafka:29092 --topic perftest --create --replication-factor 3 --partitions 3
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-all-broker.kafka:29092 --topic perftest2 --create --replication-factor 3 --partitions 3
./opt/kafka/bin/kafka-topics.sh --bootstrap-server kafka-all-broker.kafka:29092 --topic perftest3 --create --replication-factor 3 --partitions 3
The monitoring environment is installed automatically. To monitor the infrastructure, we used the official Node Exporter dashboard, available under ID 1860.
Run the tests
- Run a performance test against the cluster by building this Docker image:
docker build -t <yourname>/perfload:0.1.0 /loadgens
docker push <yourname>/perfload:0.1.0
- Submit the performance testing application:
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: loadtest
name: perf-load
namespace: kafka
spec:
progressDeadlineSeconds: 600
replicas: 4
revisionHistoryLimit: 10
selector:
matchLabels:
app: loadtest
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: loadtest
spec:
containers:
- args:
- -brokers=kafka-0:29092,kafka-1:29092,kafka-2:29092
- -topic=perftest
- -required-acks=all
- -message-size=512
- -workers=20
- -api-version=3.1.0
        image: <yourname>/perfload:0.1.0 # the image built and pushed in the previous step
imagePullPolicy: Always
name: sangrenel
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
EOF
16 - Delete the operator
If you want to delete the Koperator from your cluster, note that because of dependencies between the various components, they must be deleted in a specific order.
CAUTION:
It’s important to delete the Koperator deployment as the last step.
Uninstall Koperator
- Delete the Prometheus instance used by the Kafka cluster. If you used the sample Prometheus instance from the Koperator repository, you can use the following command; otherwise, do this step manually, according to how you deployed the Prometheus instance.
kubectl delete \
-n kafka \
-f https://raw.githubusercontent.com/banzaicloud/koperator/0.25.1/config/samples/kafkacluster-prometheus.yaml
Expected output:
clusterrole.rbac.authorization.k8s.io/prometheus deleted
clusterrolebinding.rbac.authorization.k8s.io/prometheus deleted
prometheus.monitoring.coreos.com/kafka-prometheus deleted
prometheusrule.monitoring.coreos.com/kafka-alerts deleted
serviceaccount/prometheus deleted
servicemonitor.monitoring.coreos.com/cruisecontrol-servicemonitor deleted
servicemonitor.monitoring.coreos.com/kafka-servicemonitor deleted
- Delete the KafkaCluster custom resource (CR) that represents the Kafka cluster and Cruise Control.
kubectl delete kafkaclusters -n kafka kafka
Example output:
kafkacluster.kafka.banzaicloud.io/kafka deleted
Wait for the Kafka resources (Pods, PersistentVolumeClaims, ConfigMaps, etc.) to be removed.
kubectl get pods -n kafka
Expected output:
NAME READY STATUS RESTARTS AGE
kafka-operator-operator-8458b45587-286f9 2/2 Running 0 62s
You also need to delete any other Koperator-managed CRs in the same fashion.
Note: KafkaCluster, KafkaTopic and KafkaUser custom resources are protected with Kubernetes finalizers, so those won’t be actually deleted from Kubernetes until the Koperator removes those finalizers. After the Koperator has finished cleaning up everything, it removes the finalizers. In case you delete the Koperator deployment before it cleans up everything, you need to remove the finalizers manually.
- Uninstall the Koperator deployment.
helm uninstall kafka-operator -n kafka
Expected output:
release "kafka-operator" uninstalled
- Delete the Koperator Custom Resource Definitions (CRDs).
kubectl delete -f https://github.com/banzaicloud/koperator/releases/download/v0.25.1/kafka-operator.crds.yaml
Uninstall Prometheus operator
- Uninstall the prometheus-operator deployment.
helm uninstall -n prometheus prometheus
Expected output:
release "prometheus" uninstalled
- If no other cluster resource uses the prometheus-operator CRDs, delete them.
  Note: Red Hat OpenShift clusters require those CRDs to function, so do not delete them on such clusters.
kubectl get crd | grep 'monitoring.coreos.com'| awk '{print $1};' | xargs kubectl delete crd
Uninstall Zookeeper Operator
- Delete the Zookeeper CR.
kubectl delete zookeeperclusters -n zookeeper zookeeper-server
Expected output:
zookeeperclusters.zookeeper.pravega.io/zookeeper-server deleted
Wait for the Zookeeper resources (Deployment, PersistentVolumeClaims, ConfigMaps, etc.) to be removed.
kubectl get pods -n zookeeper
Expected output:
NAME READY STATUS RESTARTS AGE
zookeeper-operator-5857967dcc-gm5l5 1/1 Running 0 3m22s
- Uninstall the zookeeper-operator deployment.
helm uninstall zookeeper-operator -n zookeeper
- If no other cluster resource uses the Zookeeper CRDs, delete the Zookeeper Operator's CRDs:
kubectl delete customresourcedefinition zookeeperclusters.zookeeper.pravega.io
Uninstall Cert-Manager
Uninstall with Helm
- Uninstall the cert-manager deployment.
helm uninstall -n cert-manager cert-manager
Expected output:
release "cert-manager" uninstalled
- If no other cluster resource uses the cert-manager CRDs, delete cert-manager's CRDs:
kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
Expected output:
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io deleted
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io deleted
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io deleted
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io deleted
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io deleted
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io deleted
17 - Tips and tricks for the Koperator
Rebalancing
The Koperator installs Cruise Control (CC) to oversee your Kafka cluster. When you change the cluster (for example, add new nodes), the Koperator engages CC to perform a rebalancing if needed. How and when CC performs rebalancing depends on its settings (see goal settings in the official CC documentation) and on how long CC was trained with Kafka’s behavior (this may take weeks).
You can also trigger rebalancing manually from the CC UI:
kubectl port-forward -n kafka svc/kafka-cruisecontrol-svc 8090:8090
Cruise Control UI will be available at http://localhost:8090.
Headless service
When the headlessServiceEnabled option is enabled (true) in your KafkaCluster CR, the operator creates a headless service for accessing the Kafka cluster from within the Kubernetes cluster.
When the headlessServiceEnabled option is disabled (false), it creates a ClusterIP service instead. When using a ClusterIP service, your client application doesn't need to be aware of every Kafka broker endpoint: it simply connects to kafka-all-broker:29092, which dynamically covers all the available brokers. That way, if the Kafka cluster is scaled dynamically, there is no need to reconfigure the client applications.
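For example, an in-cluster client needs only the single bootstrap address (the topic name is illustrative):
kafka-console-producer.sh --bootstrap-server kafka-all-broker.kafka:29092 --topic my-topic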
Retrieving broker configuration during downscale operation
When a broker is removed during a downscale operation, its configuration is removed from the kafkaCluster/spec/brokers field. You can retrieve the last configuration of such a broker with the following command.
echo <value of the kafkaCluster/status/brokersState/<brokerID>/configurationBackup> | base64 -d | gzip -d
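For example - assuming a KafkaCluster named kafka in the kafka namespace and broker ID 0 - the value can be fetched and decoded in one pipeline:
kubectl get kafkacluster kafka -n kafka -o jsonpath="{.status.brokersState['0'].configurationBackup}" | base64 -d | gzip -d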
18 - Troubleshooting the operator
The following tips and commands can help you to troubleshoot your Koperator installation.
First things to do
- Verify that the Koperator pod is running. Issue the following command: kubectl get pods -n kafka | grep kafka-operator
The output should include a running pod, for example:
NAME READY STATUS RESTARTS AGE
kafka-operator-operator-6968c67c7b-9d2xq 2/2 Running 0 10m
- Verify that the Kafka broker pods are running. Issue the following command: kubectl get pods -n kafka
The output should include a numbered running pod for each broker, with names like kafka-0-zcxk7, kafka-1-2nhj5, and so on, for example:
NAME READY STATUS RESTARTS AGE
kafka-0-zcxk7 1/1 Running 0 3h16m
kafka-1-2nhj5 1/1 Running 0 3h15m
kafka-2-z4t84 1/1 Running 0 3h15m
kafka-cruisecontrol-7f77ccf997-cqhsw 1/1 Running 1 3h15m
kafka-operator-operator-6968c67c7b-9d2xq 2/2 Running 0 3h17m
prometheus-kafka-prometheus-0 2/2 Running 1 3h16m
- If you see any problems, check the logs of the affected pod, for example:
kubectl logs kafka-0-zcxk7 -n kafka
- Check the status (State) of your resources. For example:
kubectl get KafkaCluster kafka -n kafka -o jsonpath="{.status}" |jq
- Check the status of your ZooKeeper deployment, and the logs of the zookeeper-operator and zookeeper pods.
kubectl get pods -n zookeeper
Check the KafkaCluster configuration
You can display the current configuration of your Kafka cluster using the following command:
kubectl describe KafkaCluster kafka -n kafka
The output looks like the following:
apiVersion: kafka.banzaicloud.io/v1beta1
kind: KafkaCluster
metadata:
creationTimestamp: "2022-11-21T16:02:55Z"
finalizers:
- finalizer.kafkaclusters.kafka.banzaicloud.io
- topics.kafkaclusters.kafka.banzaicloud.io
- users.kafkaclusters.kafka.banzaicloud.io
generation: 4
labels:
controller-tools.k8s.io: "1.0"
name: kafka
namespace: kafka
resourceVersion: "3474369"
uid: f8744017-1264-47d4-8b9c-9ee982728ecc
spec:
brokerConfigGroups:
default:
storageConfigs:
- mountPath: /kafka-logs
pvcSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
terminationGracePeriodSeconds: 120
brokers:
- brokerConfigGroup: default
id: 0
- brokerConfigGroup: default
id: 1
clusterImage: ghcr.io/banzaicloud/kafka:2.13-3.1.0
cruiseControlConfig:
clusterConfig: |
{
"min.insync.replicas": 3
}
config: |
...
cruiseControlTaskSpec:
RetryDurationMinutes: 0
disruptionBudget: {}
envoyConfig: {}
headlessServiceEnabled: true
istioIngressConfig: {}
listenersConfig:
externalListeners:
- containerPort: 9094
externalStartingPort: 19090
name: external
type: plaintext
internalListeners:
- containerPort: 29092
name: plaintext
type: plaintext
usedForInnerBrokerCommunication: true
- containerPort: 29093
name: controller
type: plaintext
usedForControllerCommunication: true
usedForInnerBrokerCommunication: false
monitoringConfig: {}
oneBrokerPerNode: false
readOnlyConfig: |
auto.create.topics.enable=false
cruise.control.metrics.topic.auto.create=true
cruise.control.metrics.topic.num.partitions=1
cruise.control.metrics.topic.replication.factor=2
rollingUpgradeConfig:
failureThreshold: 1
zkAddresses:
- zookeeper-server-client.zookeeper:2181
status:
alertCount: 0
brokersState:
"0":
configurationBackup: H4sIAAAAAAAA/6pWykxRsjLQUUoqys9OLXLOz0vLTHcvyi8tULJSSklNSyzNKVGqBQQAAP//D49kqiYAAAA=
configurationState: ConfigInSync
gracefulActionState:
cruiseControlState: GracefulUpscaleSucceeded
volumeStates:
/kafka-logs:
cruiseControlOperationReference:
name: kafka-rebalance-bhs7n
cruiseControlVolumeState: GracefulDiskRebalanceSucceeded
image: ghcr.io/banzaicloud/kafka:2.13-3.1.0
perBrokerConfigurationState: PerBrokerConfigInSync
rackAwarenessState: ""
version: 3.1.0
"1":
configurationBackup: H4sIAAAAAAAA/6pWykxRsjLUUUoqys9OLXLOz0vLTHcvyi8tULJSSklNSyzNKVGqBQQAAP//pYq+WyYAAAA=
configurationState: ConfigInSync
gracefulActionState:
cruiseControlState: GracefulUpscaleSucceeded
volumeStates:
/kafka-logs:
cruiseControlOperationReference:
name: kafka-rebalance-bhs7n
cruiseControlVolumeState: GracefulDiskRebalanceSucceeded
image: ghcr.io/banzaicloud/kafka:2.13-3.1.0
perBrokerConfigurationState: PerBrokerConfigInSync
rackAwarenessState: ""
version: 3.1.0
cruiseControlTopicStatus: CruiseControlTopicReady
listenerStatuses:
externalListeners:
external:
- address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:29092
name: any-broker
- address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:19090
name: broker-0
- address: a0abb7ab2e4a142d793f0ec0cb9b58ae-1185784192.eu-north-1.elb.amazonaws.com:19091
name: broker-1
internalListeners:
plaintext:
- address: kafka-headless.kafka.svc.cluster.local:29092
name: headless
- address: kafka-0.kafka-headless.kafka.svc.cluster.local:29092
name: broker-0
- address: kafka-1.kafka-headless.kafka.svc.cluster.local:29092
name: broker-1
rollingUpgradeStatus:
errorCount: 0
lastSuccess: ""
state: ClusterRunning
Getting Support
If you encounter any problems that the documentation does not address, file an issue or talk to us on the Banzai Cloud Slack channel #kafka-operator.
Various support channels are also available for Koperator.
Before asking for help, prepare the following information to make troubleshooting faster:
- Koperator version
- Kubernetes version (kubectl version)
- Helm/chart version (if you installed the Koperator with Helm)
- Koperator logs, for example kubectl logs kafka-operator-operator-6968c67c7b-9d2xq manager -n kafka and kubectl logs kafka-operator-operator-6968c67c7b-9d2xq kube-rbac-proxy -n kafka
- Kafka broker logs
- Koperator configuration
- Kafka cluster configuration (kubectl describe KafkaCluster kafka -n kafka)
- ZooKeeper configuration (kubectl describe ZookeeperCluster zookeeper-server -n zookeeper)
- ZooKeeper logs (kubectl logs zookeeper-operator-5c9b597bcc-vkdz9 -n zookeeper)
Do not forget to remove any sensitive information (for example, passwords and private keys) before sharing.
18.1 - Common errors
Upgrade failed
If you get the following error in the logs of the Koperator, update your KafkaCluster CRD. This error typically occurs when you upgrade your Koperator to a new version, but forget to update the KafkaCluster CRD.
Error: UPGRADE FAILED: cannot patch "kafka" with kind KafkaCluster: KafkaCluster.kafka.banzaicloud.io "kafka" is invalid
The recommended way to upgrade the Koperator is to upgrade the KafkaCluster CRD, then update the Koperator. For details, see Upgrade the operator.
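For instance - using the same release URL referenced elsewhere in this document, and assuming the CRDs are already installed so kubectl replace applies - updating the CRDs before upgrading the operator can look like:
kubectl replace --validate=false -f https://github.com/banzaicloud/koperator/releases/download/v0.25.1/kafka-operator.crds.yaml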
19 - Developer Guide
Contributing
If you find this project useful, here's how you can help:
- Send a pull request with your new features and bug fixes
- Help new users with issues they may encounter
- Support the development of this project and star this repo!
When you open a PR to Koperator for the first time, we will ask you to sign a standard CLA.
How to run Koperator in your cluster with your changes
Koperator is built on the kubebuilder project.
To build the operator and run tests:
- Run make
If you make changes and would like to try your own version, create your own image:
make docker-build IMG={YOUR_USERNAME}/kafka-operator:v0.0.1
make docker-push IMG={YOUR_USERNAME}/kafka-operator:v0.0.1
make deploy IMG={YOUR_USERNAME}/kafka-operator:v0.0.1
Watch the operator’s logs with:
kubectl logs -f -n kafka kafka-operator-controller-manager-0 -c manager
Alternatively, run the operator on your machine:
export KUBECONFIG=<path-to-your-kubeconfig>
make install
make run
Create the CR and let the operator set up Kafka in your cluster (you can change the spec of the Kafka cluster in the YAML file to suit your needs). Remember, you need an Apache ZooKeeper server to run Kafka.
kubectl create -n kafka -f config/samples/simplekafkacluster.yaml
Limitations on minikube
Minikube does not have a load balancer implementation, so the envoy service will not get an external IP and the operator will get stuck at this point. A possible solution is to use https://github.com/elsonrodriguez/minikube-lb-patch. The operator will be able to proceed if you run the following command:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
20 - License of Koperator
Copyright (c) 2019 Banzai Cloud, Inc.
Licensed under the Apache License, Version 2.0 (the “License”);
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.