This section describes the values you can use in a cluster descriptor file when creating Pipeline Kubernetes Engine (PKE) clusters on AWS. This file specifies the properties of the cluster you want to create. For details on how to create the cluster, see Create cluster — Banzai Cloud PKE on AWS.

name (string) 🔗︎

The name of the cluster. It must be unique within the Pipeline instance. For example: my-test-cluster-2020-07-22

location (string) 🔗︎

The region of the cluster, for example: us-east-2. To list the regions where PKE is available, you can use the following command:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/regions | jq .

cloud (string) 🔗︎

The name of the cloud provider to use. In this scenario, it must be “amazon”.

secretName (string) 🔗︎

The name of an existing Pipeline secret to use, for example, my-aws-secret. If you don’t already have a secret, create one. For details, see Create an AWS secret.
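
For reference, a minimal sketch of creating an Amazon secret with the banzai CLI (the values are placeholders; see Create an AWS secret for the authoritative steps):

banzai secret create <<EOF
{
  "name": "my-aws-secret",
  "type": "amazon",
  "values": {
    "AWS_ACCESS_KEY_ID": "<your-access-key-id>",
    "AWS_SECRET_ACCESS_KEY": "<your-secret-access-key>"
  }
}
EOF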

properties (object) 🔗︎

The detailed specification of the cluster. For PKE clusters, it must include the ‘pke’ object. For example:

"pke": {
  "nodepools": [
    {
      "name": "master",
      "roles": [
        "master"
      ],
      "provider": "amazon",
      "autoscaling": false,
      "providerConfig": {
        "autoScalingGroup": {
          "name": "master",
          "zones": [
            "eu-west-2a"
          ],
          "instanceType": "c5.large",
          "launchConfigurationName": "master",
          "spotPrice": "0.101",
          "size": {
            "desired": 1,
            "min": 1,
            "max": 1
          }
        }
      }
    },
    {
      "name": "pool1",
      "roles": [
        "worker"
      ],
      "provider": "amazon",
      "autoscaling": true,
      "providerConfig": {
        "autoScalingGroup": {
          "name": "pool1",
          "zones": [
            "eu-west-2a"
          ],
          "instanceType": "m4.large",
          "launchConfigurationName": "pool1",
          "spotPrice": "0.116",
          "size": {
            "desired": 3,
            "min": 3,
            "max": 4
          }
        }
      }
    }
  ],
  "kubernetes": {
    "version": "1.17.9",
    "rbac": {
      "enabled": true
    }
  },
  "cri": {
    "runtime": "containerd"
  }
}

properties.pke (object) 🔗︎

The detailed specification of the PKE cluster. The following properties are required:

Container runtime configuration - properties.pke.cri (object) 🔗︎

The container runtime (either containerd or docker) to use and its configuration. By default, Pipeline uses containerd. For example:

"cri": {
  "runtime": "containerd"
}
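
To use the Docker runtime instead:

"cri": {
  "runtime": "docker"
}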

Kubernetes configuration - properties.pke.kubernetes (object) 🔗︎

Specifies the Kubernetes configuration of the cluster, for example:

"kubernetes": {
  "version": "1.17.9",
  "rbac": {
    "enabled": true
  }
},

The following properties are required:

properties.pke.kubernetes.oidc (object) 🔗︎

Set kubernetes.oidc.enabled to true to enable OIDC authentication to the API server.
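
For example:

"oidc": {
  "enabled": true
}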

properties.pke.kubernetes.rbac (object) 🔗︎

Set kubernetes.rbac.enabled to true to enable role-based access control (RBAC) on the cluster.

properties.pke.kubernetes.version (string) 🔗︎

The Kubernetes version to run on the cluster. To list the supported versions and the default version in a given region, run the following command:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/regions/us-east-2/versions | jq .

For the list of supported versions for different cloud providers, see Provider support.

Network configuration - properties.pke.network (object) 🔗︎

Specifies the networking settings of the cluster.

The following properties are required:

properties.pke.network.apiServerAddress (string) 🔗︎

The address of the Kubernetes API server, for example, 10.240.0.204

properties.pke.network.podCIDR (string) 🔗︎

The CIDR range for the pods, for example, 10.200.0.0/16. This range must not overlap with the serviceCIDR range.

properties.pke.network.provider (string) 🔗︎

The networking provider to use on the cluster. One of: calico, cilium, or weave.

properties.pke.network.cloudProviderConfig (object) 🔗︎

The cloud provider-specific network configuration, for example, the VPC and subnets to use on AWS. For details, see the Subnets and Virtual Private Cloud (VPC) sections.

properties.pke.network.serviceCIDR (string) 🔗︎

The CIDR where services are exposed, for example, 10.32.0.0/24
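
A minimal sketch assembling the properties above (the values are illustrative placeholders; the vpc and subnets objects are described in the Subnets and Virtual Private Cloud sections later in this section):

"network": {
  "apiServerAddress": "10.240.0.204",
  "podCIDR": "10.200.0.0/16",
  "serviceCIDR": "10.32.0.0/24",
  "provider": "weave",
  "cloudProviderConfig": {
    "vpc": {
      "vpcId": "{{vpc-id}}"
    },
    "subnets": [
      {
        "subnetId": "{{subnet-us-east-2a-zone}}"
      },
      {
        "subnetId": "{{subnet-us-east-2b-zone}}"
      }
    ]
  }
}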

Node pools - properties.pke.nodePools (list) 🔗︎

The properties.pke.nodePools list defines the node pools and the nodes of the cluster. (In the cluster descriptor, the key appears as nodepools.) Each node pool has an identifier (for example, pool1, pool2, or something more meaningful), and contains the properties of the node pool.

The available node pool properties are described below. For example:

"properties": {
  "pke": {
    "nodepools": [
      {
        "name": "master",
        "roles": [
          "master"
        ],
        "provider": "amazon",
        "autoscaling": false,
        "providerConfig": {
          "autoScalingGroup": {
            "name": "master",
            "zones": [
              "eu-west-2a"
            ],
            "instanceType": "c5.large",
            "launchConfigurationName": "master",
            "spotPrice": "0.101",
            "size": {
              "desired": 1,
              "min": 1,
              "max": 1
            }
          }
        }
      },
      {
        "name": "pool1",
        "roles": [
          "worker"
        ],
        "provider": "amazon",
        "autoscaling": true,
        "providerConfig": {
          "autoScalingGroup": {
            "name": "pool1",
            "zones": [
              "eu-west-2a"
            ],
            "instanceType": "m4.large",
            "launchConfigurationName": "pool1",
            "spotPrice": "0.116",
            "size": {
              "desired": 3,
              "min": 3,
              "max": 4
            }
          }
        }
      }
    ],

autoscaling (true | false) 🔗︎

If autoscaling is enabled, the number of nodes in the node pool can be increased and decreased according to the load of the cluster. You can set the minimum and maximum number of nodes in the providerConfig.autoScalingGroup.size.min and providerConfig.autoScalingGroup.size.max values, respectively.

labels (object) 🔗︎

The labels (key-value pairs) to add to the nodes of the node pool. For example:

{"example.io/label1":"value1"}

name (string) 🔗︎

The name of the node pool.

provider (string) 🔗︎

The name of the cloud provider to use. In this scenario, it must be “amazon”.

providerConfig (object) 🔗︎

Contains the specifications of the AWS Auto Scaling Group used to run the node pool, within an autoScalingGroup object.

"providerConfig": {
  "autoScalingGroup": {
    ...
  }
}

The following properties are available in the nodePools.providerConfig.autoScalingGroup object:

nodePools.providerConfig.autoScalingGroup.image (string) 🔗︎

The ID of the Amazon Machine Image (AMI) to run on the nodes, for example, ami-06d1667f.

nodePools.providerConfig.autoScalingGroup.instanceType (string) 🔗︎

The EC2 instance type to use for the nodes of the node pool, for example, “t2.medium” or “c5.large”.

You can check the available regions and instance types in our Cloudinfo service.
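
For example, to list the instance types available for PKE in the us-east-2 region, you can query the Cloudinfo API (this endpoint follows the same pattern as the regions and versions queries above):

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/regions/us-east-2/products | jq .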

nodePools.providerConfig.autoScalingGroup.launchConfigurationName (string) 🔗︎

The name of the launch configuration used for the Auto Scaling Group.

nodePools.providerConfig.autoScalingGroup.name (string) 🔗︎

The name of the Auto Scaling Group.

nodePools.providerConfig.autoScalingGroup.securityGroupID (string) 🔗︎

The ID of the security group for the VPC that your node pool launches into, for example, “sg-0b3ddd7bef209c1d0”.

Node pool size - nodePools.providerConfig.autoScalingGroup.size (object) 🔗︎

Specifies the desired, minimum, and maximum number of nodes in the node pool, for example:

"size": {
  "desired": 3,
  "min": 3,
  "max": 4
},

If autoscaling is enabled, the number of nodes is between the min and max values, depending on the load.

nodePools.providerConfig.autoScalingGroup.spotPrice (string) 🔗︎

If not empty, this value is the maximum price (in USD) that you are willing to pay to run the node pool on spot instances. If empty, the node pool uses only on-demand instances. Spot instances let you use spare EC2 capacity at a lower price than On-Demand instances, with some technical limitations.

nodePools.providerConfig.autoScalingGroup.subnets (list) 🔗︎

See Subnets.

Add custom tags - nodePools.providerConfig.autoScalingGroup.tags (object) 🔗︎

In addition to the usual Pipeline-managed tags, you can add custom tags (key-value pairs) to the node pools. The following example defines a node pool and adds customTag1 and customTag2 to the resources created by the CloudFormation stacks used to create the cluster.

  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-west-2a"
              ],
              "instanceType": "m4.large",
              "launchConfigurationName": "pool1",
              "spotPrice": "0.116",
              "size": {
                "desired": 3,
                "min": 3,
                "max": 4
              },
              "tags": {
                "customTag1": "customValue1",
                "customTag2": "customValue2"
              },
              "vpcID": "vpc-002ac93c996bf18d4",
              "subnets": [
                "subnet-0ccbba430fa2123da"
              ]
            }
          }
        }
      ],

volumeSize - nodePools.providerConfig.autoScalingGroup.volumeSize (integer) 🔗︎

The explicit size of the EBS volume in GiB to allocate for each node in the node pool. When no explicit node volume size is specified, the Pipeline instance’s default node volume size is used instead. When neither an explicit node volume size is requested nor a Pipeline instance default is configured, the volume size equals the size of the AMI, but is at least 50 GiB.

In every case, the node volume size MUST be greater than or equal to the size of the Amazon Machine Image (AMI) specified in the image property, or of the default image when no explicit image ID is provided.
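
For example, to allocate a 100 GiB volume for every node of the node pool (assuming the AMI is smaller than 100 GiB):

"autoScalingGroup": {
  ...
  "volumeSize": 100
}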

vpcID - nodePools.providerConfig.autoScalingGroup.vpcID (string) 🔗︎

See Virtual Private Cloud.

zones - nodePools.providerConfig.autoScalingGroup.zones (list) 🔗︎

List of the AWS availability zones to use for the cluster. Pipeline uses this list when no subnets are specified for the Auto Scaling Group, and creates subnets in the listed zones automatically. (Specifying subnets determines the availability zones as well.)

"zones": ["eu-central-1b"]

nodePools.roles (list) 🔗︎

The cluster roles that the node pool is used for: master, system, worker. For example:

"roles": [
  "master"
],

Note: Autoscaling is not supported on the master node pool.

Proxy configuration - properties.pke.proxy (object) 🔗︎

Specifies the HTTP and HTTPS proxy that the cluster uses, for example:

"proxy": {
  "http": {
    "host": "proxy.example.com`",
    "port": 9091
  }
},

properties.pke.proxy.exceptions (list) 🔗︎

A list of domain names that do not require a proxy. For example:

"exceptions": ["example.com", "example.net"]

properties.pke.proxy.http (object) 🔗︎

Properties of the HTTP proxy to use. The following properties are required:

  • host

host (string) 🔗︎

The host of the proxy, for example, proxy.example.com

port (integer) 🔗︎

The port number the proxy is available on.

secretId (string) 🔗︎

The ID of the secret containing the username and password for the proxy.

scheme (string) 🔗︎

The scheme of the proxy, for example, http or https.

properties.pke.proxy.https (object) 🔗︎

Properties of the HTTPS proxy to use. The available properties are the same as the http proxy properties.
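
For example, a sketch combining the proxy properties above (the host names, ports, and secret ID are placeholders):

"proxy": {
  "http": {
    "host": "proxy.example.com",
    "port": 9091
  },
  "https": {
    "host": "proxy.example.com",
    "port": 9092,
    "secretId": "{{proxy-secret-id}}"
  },
  "exceptions": ["example.com", "example.net"]
}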

Subnets - properties.pke.network.cloudProviderConfig.subnets (list) 🔗︎

The properties.pke.network.cloudProviderConfig.subnets list contains the details of the subnets where the cluster will be created: reference existing subnets using their ID, or specify new subnets using their availability zone and CIDR. You must list at least two subnets. All worker nodes are launched in the first subnet in the list. Note that this is not necessarily the first subnet in the cluster create request payload, as deserialization may change the order. You can override this by setting the vpcID and subnets fields at the node pool level.

Note: The subnets must have appropriate routing set up to allow outbound access to the Internet for worker nodes.

For example:

 "subnets": [
  {
    "subnetId": "{{subnet-us-east-2a-zone}}"
  },
  {
    "subnetId": "{{subnet-us-east-2b-zone}}"
  },
  {
    "subnetId": "{{subnet-us-east-2c-zone}}"
  }
]

or

"subnets": [
  {
    "cidr": "192.168.64.0/20",
    "availabilityZone": "us-east-2a"
  },
  {
    "cidr": "192.168.80.0/20",
    "availabilityZone": "us-east-2b"
  }
]

availabilityZone (string) 🔗︎

The Availability Zone (AZ) where the subnet is created.

subnetId (string) 🔗︎

The identifier of an existing subnet to use when creating the PKE cluster. If subnetId is not provided, a new subnet is created for the cluster. When creating a new subnet, you can specify the CIDR range of the subnet.

cidr (string) 🔗︎

The CIDR range for the subnet when a new subnet is created. If not provided, a default value is generated for each subnet.

Virtual Private Cloud (VPC) - properties.pke.network.cloudProviderConfig.vpc (object) 🔗︎

The properties.pke.network.cloudProviderConfig.vpc object contains the details of the VPC where the cluster will be created. By default, every node of the cluster is created in the VPC and subnet specified under the cloudProviderConfig field. You can override this at the node pool level by setting the vpcID field.

"vpc": {
  "vpcId": "{{vpc-id}}"
},

vpcId (string) 🔗︎

The identifier of an existing Virtual Private Cloud (VPC) to use when creating the PKE cluster. If vpcId is not provided, a new VPC is created for the cluster. When creating a new VPC, you can specify the CIDR range for the VPC.

cidr (string) 🔗︎

The CIDR range for the VPC when a new VPC is created. If not provided, the default value is used: 192.168.0.0/16.
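
For example, to have Pipeline create a new VPC with a custom CIDR range instead of referencing an existing one (a sketch; the value is illustrative):

"vpc": {
  "cidr": "10.0.0.0/16"
},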

Example: a cluster with a single worker node pool 🔗︎

The following command creates a PKE cluster on AWS with a single-node master node pool and a 3-node worker node pool called “pool1”. All nodes of the cluster have the same instance type (t2.medium). Autoscaling is enabled, so under high load the worker node pool can grow to 4 nodes.

banzai cluster create <<EOF
{
  "name": "pke-aws-cluster",
  "location": "eu-west-1",
  "cloud": "amazon",
  "secretName": "my-aws-secret",
  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "t2.medium",
              "launchConfigurationName": "pool1",
              "size": {
                "min": 3,
                "max": 4
              }
            }
          }
        },
        {
          "name": "master",
          "roles": [
            "master"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "t2.medium",
              "launchConfigurationName": "master",
              "size": {
                "min": 1,
                "max": 1
              }
            }
          }
        }
      ],
      "kubernetes": {
        "version": "pke-version",
        "rbac": {
          "enabled": true
        }
      },
      "cri": {
        "runtime": "containerd"
      }
    }
  }
}
EOF

Example: Multiple node pools on spot instances 🔗︎

The following example creates a heterogeneous PKE cluster with three different node pools (called pool1, pool2, and master). The node pools use different instance types. The worker node pools are created from spot instances, because the spotPrice option is set for the pool1 and pool2 node pools.

banzai cluster create <<EOF
{
  "name": "multi-nodepool-cluster",
  "location": "eu-west-1",
  "cloud": "amazon",
  "secretName": "my-aws-secret",
  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "master",
          "roles": [
            "master"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "master",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "c5.large",
              "launchConfigurationName": "master",
              "size": {
                "min": 1,
                "max": 1
              }
            }
          }
        },
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "t2.medium",
              "launchConfigurationName": "pool1",
              "spotPrice": "0.05",
              "size": {
                "min": 3,
                "max": 4
              }
            }
          }
        },
        {
          "name": "pool2",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool2",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "c4.2xlarge",
              "launchConfigurationName": "pool2",
              "spotPrice": "0.45",
              "size": {
                "min": 3,
                "max": 4
              }
            }
          }
        }
      ],
      "kubernetes": {
        "version": "1.18.6",
        "rbac": {
          "enabled": true
        }
      },
      "cri": {
        "runtime": "containerd"
      }
    }
  }
}
EOF

Example: Multi-AZ PKE cluster on AWS 🔗︎

The following command creates a PKE cluster with nodes distributed across existing subnets, supporting advanced use cases that require spreading worker nodes across multiple subnets and AZs. By default, every node of the cluster is created in the VPC and subnet specified under the cloudProviderConfig field. You can override this at the node pool level by setting the vpcID and subnets fields.

Note: The subnets must have appropriate networking set up to allow outbound access to the Internet for worker nodes.

banzai cluster create <<EOF
{
  "name": "pke-aws-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "pke-aws",
  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "master",
          "roles": [
            "master"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "master",
              "zones": [
                "eu-west-1a"
              ],
              "instanceType": "c5.large",
              "spotPrice": "",
              "size": {
                "desired": 3,
                "min": 3,
                "max": 3
              },
              "vpcID": "{{vpc-id}}",
              "subnets": [
                "{{subnet-us-east-2a-zone}}"
              ]
            }
          }
        },
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "instanceType": "t2.large",
              "spotPrice": "0.2",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 3
              },
              "vpcID": "{{vpc-id}}",
              "subnets": [
                "{{subnet-us-east-2b-zone}}"
              ]
            }
          }
        }
      ],
      "kubernetes": {
        "version": "{{pke-version}}",
        "rbac": {
          "enabled": true
        }
      },
      "cri": {
        "runtime": "containerd"
      },
      "network": {
        "cloudProviderConfig": {
          "vpcID": "{{vpc-id}}",
          "subnets": [
            "{{subnet-us-east-2a-zone}}"
          ]
        }
      }
    }
  }
}
EOF

Example: Multi-AZ PKE cluster without VPC 🔗︎

The following command creates a PKE cluster with a single-node master node pool and four worker node pools distributed across different availability zones (the zones property differs between the node pools).

banzai cluster create <<EOF
{
  "name": "pke-multi-zone-cluster",
  "location": "eu-north-1",
  "cloud": "amazon",
  "secretName": "aws-trash",
  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "master",
          "roles": [
            "master"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "master",
              "zones": [
                "eu-north-1a"
              ],
              "instanceType": "c5.large",
              "launchConfigurationName": "master",
              "spotPrice": "0.091",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 1
              }
            }
          }
        },
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-north-1a"
              ],
              "instanceType": "c5.xlarge",
              "launchConfigurationName": "pool1",
              "spotPrice": "0.18",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 1
              }
            }
          },
          "labels": {
            "node-role.kubernetes.io/kafka": "kafka"
          }
        },
        {
          "name": "pool2",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-north-1b"
              ],
              "instanceType": "c5.xlarge",
              "launchConfigurationName": "pool1",
              "spotPrice": "0.18",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 1
              }
            }
          },
          "labels": {
            "node-role.kubernetes.io/kafka": "kafka"
          }
        },
        {
          "name": "pool3",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "zones": [
                "eu-north-1c"
              ],
              "instanceType": "c5.xlarge",
              "launchConfigurationName": "pool1",
              "spotPrice": "0.18",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 1
              }
            }
          },
          "labels": {
            "node-role.kubernetes.io/kafka": "kafka"
          }
        },
        {
          "name": "pool4",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool2",
              "zones": [
                "eu-north-1a"
              ],
              "instanceType": "c5.xlarge",
              "launchConfigurationName": "pool2",
              "spotPrice": "0.18",
              "size": {
                "desired": 2,
                "min": 2,
                "max": 2
              }
            }
          },
          "labels": {
            "node-role.kubernetes.io/app": "app"
          }
        }
      ],
      "kubernetes": {
        "version": "1.18.6",
        "rbac": {
          "enabled": true
        }
      },
      "cri": {
        "runtime": "containerd"
      }
    }
  }
}
EOF