Overview

Banzai Cloud Pipeline Kubernetes Engine (PKE) is a simple, secure, and powerful CNCF-certified Kubernetes distribution, and the preferred Kubernetes runtime of the Pipeline platform.

This quickstart will guide you through the steps needed to set up a PKE cluster on AWS with Banzai Cloud Pipeline.

Prerequisites

  • AWS credentials
  • Banzai CLI tool authenticated against the Pipeline instance

Create an AWS secret

In order to access resources on AWS, the appropriate credentials need to be registered in Banzai Cloud Pipeline's secret store. (A reference to this secret will be used later on instead of passing the credentials around.)

Follow this guide to create PKE AWS credentials

The following values are needed for the secret:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name

You can simply do this with the following command (replace the values in the mustache brackets):

banzai secret create <<EOF 
{
    "name": "pke-aws",
    "type": "amazon",
    "values": {
      "AWS_ACCESS_KEY_ID": "{{aws_access_key_id}}",
      "AWS_DEFAULT_REGION": "{{aws_default_region}}",
      "AWS_SECRET_ACCESS_KEY": "{{aws_secret_access_key}}"
    }
  }
EOF
Id                                                                Name     Type    UpdatedBy  Tags
b32343e28d37e09c26d91b4271eaa8dd689b16d9f1aba07fdc73af2a27750309  pke-aws  amazon  lpuskas    []

Alternatively, you can use the --magic flag to create the secret, provided the AWS credentials are available in your local environment:

banzai secret create -t amazon -n pke-aws --magic --tag ""
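Before moving on, it can be worth confirming that the secret actually landed in the secret store. The snippet below is a minimal sketch: it assumes `banzai secret list` prints one row per secret with the secret name in it, as in the sample output above, and `secret_exists` is a hypothetical helper name, not a banzai CLI command.

```shell
# Sketch: verify the AWS secret is registered before creating a cluster.
# Assumes `banzai secret list` prints one row per secret, as in the
# sample output above; grep only checks that the secret name appears.
secret_exists() {
  banzai secret list 2>/dev/null | grep -q "pke-aws"
}

if secret_exists; then
  echo "secret found"
else
  echo "secret missing"
fi
```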

Create a PKE cluster on AWS

Available instance types can be looked up in our Cloudinfo service. The AWS regions where the PKE service is supported can be retrieved from Cloudinfo with the following command:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/regions | jq .

The supported PKE versions in a given region can be retrieved with the following command:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/regions/us-east-2/versions | jq .
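The two Cloudinfo queries above differ only in their URL path, so a tiny wrapper can save some typing. This is just a convenience sketch around the endpoints shown above; `cloudinfo` is a hypothetical helper name, not part of the banzai CLI.

```shell
# Convenience sketch: query the Cloudinfo PKE endpoints shown above.
# `cloudinfo` is a hypothetical helper, not a banzai CLI command.
cloudinfo() {
  curl -s "https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/pke/$1" | jq .
}

# Usage:
#   cloudinfo regions
#   cloudinfo regions/us-east-2/versions
```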

Create a single node PKE cluster on AWS

The command below creates a single-node PKE cluster on AWS (both the master and worker roles will be assigned to the node):

banzai cluster create <<EOF
{
  "cloud": "amazon",
  "location": "us-east-2",
  "name": "pke-aws-cluster",
  "properties": {
    "pke": {
      "cri": {
        "runtime": "containerd"
      },
      "kubernetes": {
        "rbac": {
          "enabled": true
        },
        "version": "{{pke-version}}"
      },
      "nodePools": [
        {
          "autoscaling": false,
          "name": "master",
          "provider": "amazon",
          "providerConfig": {
            "autoScalingGroup": {
              "instanceType": "c5.large",
              "size": {
                "desired": 1,
                "max": 1,
                "min": 1
              },
              "spotPrice": "",
              "zones": [
                "us-east-2a"
              ]
            }
          },
          "roles": [
            "master",
            "worker"
          ]
        }
      ]
    }
  },
  "secretName": "pke-aws"
}
EOF

INFO[0011] cluster is being created
INFO[0011] you can check its status with the command `banzai cluster get "pke-aws-cluster"`
Id    Name
1     pke-aws-cluster
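Cluster creation takes several minutes, so it can be convenient to poll the status command in a loop rather than re-running it by hand. The sketch below assumes a `cluster_status` helper that you would implement on top of `banzai cluster get`; both the helper and the RUNNING state name are assumptions for illustration, not CLI guarantees.

```shell
# Placeholder: replace the body with real field extraction from the
# output of `banzai cluster get` (the exact format may vary by version).
cluster_status() {
  banzai cluster get "pke-aws-cluster"
}

# Sketch: poll until the cluster reports a ready state. The RUNNING
# state name and the 10-second interval are assumptions.
wait_for_cluster() {
  while true; do
    status=$(cluster_status)
    if [ "$status" = "RUNNING" ]; then
      echo "cluster is ready"
      break
    fi
    sleep 10
  done
}
```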

Create a multi node PKE cluster on AWS

The following example creates a multi-node PKE cluster (the nodePools property has multiple elements):

banzai cluster create <<EOF
{
  "cloud": "amazon",
  "location": "us-east-2",
  "name": "pke-aws-cluster",
  "properties": {
    "pke": {
      "cri": {
        "runtime": "containerd"
      },
      "kubernetes": {
        "rbac": {
          "enabled": true
        },
        "version": "{{pke-version}}"
      },
      "nodePools": [
        {
          "autoscaling": false,
          "name": "master",
          "provider": "amazon",
          "providerConfig": {
            "autoScalingGroup": {
              "instanceType": "c5.large",
              "size": {
                "desired": 1,
                "max": 1,
                "min": 1
              },
              "spotPrice": "",
              "zones": [
                "us-east-2a"
              ]
            }
          },
          "roles": [
            "master"
          ]
        },
        {
          "autoscaling": true,
          "name": "worker",
          "provider": "amazon",
          "providerConfig": {
            "autoScalingGroup": {
              "instanceType": "t2.medium",
              "size": {
                "desired": 2,
                "max": 3,
                "min": 2
              },
              "spotPrice": "0.0464"
            }
          },
          "roles": [
            "worker"
          ]
        }
      ]
    }
  },
  "secretName": "pke-aws"
}
EOF

Notable differences compared to the first example:

  • the master and the worker nodes reside in separate nodepools in this case; you can add multiple nodepools, one for each instance type with the desired role;
  • the worker nodepool requires at least 2 nodes and has autoscaling enabled;
  • the worker nodepool contains spot instances, as the spotPrice property is set.

Create a Multi-AZ PKE cluster on AWS

The command below creates a PKE cluster with nodes distributed across existing subnets, supporting advanced use cases that require spreading worker nodes across multiple subnets and AZs. The subnets must have appropriate routing set up to allow worker nodes outbound access to the Internet.

banzai cluster create <<EOF
{
  "name": "pke-aws-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "pke-aws",
  "properties": {
    "pke": {
      "nodePools": [
        {
          "name": "master",
          "roles": [
            "master"
          ],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "master",
              "zones": [
                "us-east-2a"
              ],
              "instanceType": "c5.large",
              "spotPrice": "",
              "size": {
                "desired": 3,
                "min": 3,
                "max": 3
              },
              "vpcID": "{{vpc-id}}",
              "subnets": [
                "{{subnet-us-east-2a-zone}}"
              ]
            }
          }
        },
        {
          "name": "pool1",
          "roles": [
            "worker"
          ],
          "provider": "amazon",
          "autoscaling": true,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool1",
              "instanceType": "t2.large",
              "spotPrice": "0.2",
              "size": {
                "desired": 1,
                "min": 1,
                "max": 3
              },
              "vpcID": "{{vpc-id}}",
              "subnets": [
                "{{subnet-us-east-2b-zone}}"
              ]
            }
          }
        }
      ],
      "kubernetes": {
        "version": "{{pke-version}}",
        "rbac": {
          "enabled": true
        }
      },
      "cri": {
        "runtime": "containerd"
      },
      "network": {
        "cloudProviderConfig": {
          "vpcID": "{{vpc-id}}",
          "subnets": [
            "{{subnet-us-east-2a-zone}}"
          ]
        }
      }
    }
  }
}
EOF

By default, all the nodes of the cluster are created in the VPC and subnet specified under the cloudProviderConfig field. These defaults can be overridden at the node pool level through the vpcID and subnets fields, as in the example above.

Check the status of the cluster

You can check the status of the cluster creation with the following command:

banzai cluster get "pke-aws-cluster"

Once the cluster is ready, you can try it out with some simple commands. banzai cluster shell executes a shell within the context of the selected cluster. If you type a command in the opened shell, or pass it as arguments, it is executed in a prepared environment. For example, you can list the nodes of the cluster using the original kubectl command:

banzai cluster shell --cluster-name "pke-aws-cluster" -- kubectl get nodes

Further steps

If you are happy with the results, go on with the Deploying workload guide to learn about the basic features of a cluster.