Overview

Amazon Elastic Kubernetes Service (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS.

This quickstart will guide you through the steps needed to set up an Amazon EKS cluster with Banzai Cloud Pipeline.

Prerequisites

  • AWS credentials
  • Banzai CLI tool authenticated against the Pipeline instance

Create an AWS secret

In order to access resources on AWS, the appropriate credentials need to be registered in Banzai Cloud Pipeline’s secret store. (A reference to this secret is used later on instead of passing the credentials around.)

Follow this guide to create the EKS AWS credentials.

The following values are needed for the secret:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name

You can do this with the following command (replace the values in the mustache brackets):

banzai secret create <<EOF 
{
    "name": "eks-aws-secret",
    "type": "amazon",
    "values": {
      "AWS_ACCESS_KEY_ID": "{{aws_access_key_id}}",
      "AWS_DEFAULT_REGION": "{{aws_default_region}}",
      "AWS_SECRET_ACCESS_KEY": "{{aws_secret_access_key}}"
    }
  }
EOF

The command returns the ID and metadata of the created secret:

Id                                                                Name            Type    UpdatedBy  Tags
b32343e28d37e09c26d91b4271eaa8dd689b16d9f1aba07fdc73af2a27750309  eks-aws-secret  amazon  lpuskas    []

Alternatively, you can use the --magic flag to create the secret, provided the AWS credentials are available in the local environment:

banzai secret create -t amazon -n eks-aws-secret --magic --tag ""
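
For example, assuming --magic picks the credentials up from the standard AWS environment variables, you could export them (with the placeholders replaced) before running the command above:

export AWS_ACCESS_KEY_ID="{{aws_access_key_id}}"
export AWS_SECRET_ACCESS_KEY="{{aws_secret_access_key}}"
export AWS_DEFAULT_REGION="{{aws_default_region}}"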

Create EKS clusters on AWS

Available instance types, supported regions and EKS versions can all be looked up in our Cloudinfo service. The AWS regions where the EKS service is supported can be retrieved with:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/eks/regions | jq .

The supported EKS versions in a given region can be retrieved with:

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/eks/regions/eu-west-1/versions | jq .
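
The instance types offered for EKS in a given region can be listed in a similar way (this sketch assumes the Cloudinfo products endpoint; adjust the region as needed):

curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/eks/regions/eu-west-1/products | jq .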

Create a simple EKS cluster on AWS

The command below creates a 3-node EKS cluster on AWS (the master node is provided by the EKS service):

banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-west-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0",
          "count": 3,
          "minCount": 3,
          "maxCount": 4,
          "autoscaling": true,
          "instanceType": "t2.medium"
        }
      }
    }
  }
}
EOF

Notes:

  • all nodes forming the cluster will be of the same instance type (t2.medium)
  • since autoscaling is turned on, the cluster may scale up to 4 nodes under higher load (you can verify the current node count as shown below)
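
Once the cluster is up, a quick way to verify the node count is to list the nodes through the cluster shell (the same command is explained in more detail at the end of this guide):

banzai cluster shell --cluster-name "eks-cluster" -- kubectl get nodes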

Create a heterogeneous EKS cluster on AWS

The following example creates a heterogeneous EKS cluster (the nodePools property has multiple entries):

banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.0464",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "t2.medium"
        },
        "pool2": {
          "spotPrice": "0.1",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "m4.large"
        },
        "pool3": {
          "spotPrice": "0.133",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "r4.large"
        }
      }
    }
  }
}
EOF

Notable differences compared to the first example:

  • this cluster will be formed of nodes of different instance types (there are 3 node pools defined); you can verify this with the check below
  • the number of instances is controlled on a per node pool basis
  • since spot prices are specified, the cluster will be formed of spot instances
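
A minimal check of the resulting instance types, assuming the cluster was created with the name above (on older Kubernetes versions the label is beta.kubernetes.io/instance-type instead of node.kubernetes.io/instance-type):

banzai cluster shell --cluster-name "eks-cluster" -- kubectl get nodes -L node.kubernetes.io/instance-type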

Create a Multi-AZ EKS into existing VPC and Subnets

The command below creates an EKS cluster with worker nodes distributed across existing subnets, supporting advanced use cases that require spreading worker nodes across multiple subnets and availability zones. The subnets must have appropriate routing set up to allow outbound internet access for the worker nodes (a quick way to check this is shown after the example).

banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.06",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "t2.medium",
          "subnet": {
              "subnetId": "{{subnet-us-east-2a-zone}}"
           }
        },
        "pool2": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.large",
          "subnet": {
              "subnetId": "{{subnet-us-east-2b-zone}}"
           }
        },
        "pool3": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.xlarge",
          "subnet": {
              "subnetId": "{{subnet-us-east-2c-zone}}"
          }
        }
      },
      "vpc": {
        "vpcId": "{{vpc-id}}"
      },
      "subnets": [
        {
          "subnetId": "{{subnet-us-east-2a-zone}}"
        },
        {
          "subnetId": "{{subnet-us-east-2b-zone}}"
        },
        {
          "subnetId": "{{subnet-us-east-2c-zone}}"
        }
      ]
    }
  }
}
EOF

The created EKS cluster will make use of the subnets listed under the subnets section. This list should also contain subnets that might be used in the future (e.g. additional subnets to be used by new node pools the cluster is expanded with later). Node pools are created in the subnet specified for the node pool in the payload. If no subnet is specified for a node pool, the node pool is created in one of the subnets from the subnets list. Multiple node pools can share the same subnet.
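
To confirm that a subnet has outbound internet access before creating the cluster, you can inspect its route table with the AWS CLI (a sketch; the subnet ID is a placeholder, and a default route via an internet or NAT gateway indicates outbound access):

aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values={{subnet-us-east-2a-zone}}" | jq '.RouteTables[].Routes'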

Create a Multi-AZ EKS into new VPC and Subnets

The command below creates an EKS cluster with worker nodes distributed across multiple subnets created by Pipeline. This only differs from Create a Multi-AZ EKS into existing VPC and Subnets in that the cluster is not created on existing infrastructure; instead, Pipeline provisions the VPC and subnets for the cluster.

banzai cluster create <<EOF
{
  "name": "eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "eks-aws-secret",
  "properties": {
    "eks": {
      "version": "{{eks-version}}",
      "nodePools": {
        "pool1": {
          "spotPrice": "0.06",
          "count": 1,
          "minCount": 1,
          "maxCount": 3,
          "autoscaling": true,
          "instanceType": "t2.medium",
          "subnet": {
              "cidr": "192.168.64.0/20",
              "availabilityZone": "us-east-2a"
           }
        },
        "pool2": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.large",
          "subnet": {
              "cidr": "192.168.80.0/20",
              "availabilityZone": "us-east-2b"
           }
        },
        "pool3": {
          "spotPrice": "0.2",
          "count": 1,
          "minCount": 1,
          "maxCount": 2,
          "autoscaling": true,
          "instanceType": "c5.xlarge",
          "subnet": {
              "cidr": "192.168.96.0/20",
              "availabilityZone": "us-east-2c"
          }
        }
      },
      "vpc": {
        "cidr": "192.168.0.0/16"
      },
      "subnets": [
        {
          "cidr": "192.168.64.0/20",
          "availabilityZone": "us-east-2a"
        },
        {
          "cidr": "192.168.80.0/20",
          "availabilityZone": "us-east-2b"
        }
      ]
    }
  }
}
EOF

The VPC and all subnet definitions are collected from the payload and created by Pipeline with the provided parameters.
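
If you want to double-check the provisioned network, one way is to look up the VPC by the CIDR from the payload and then list its subnets with the AWS CLI (a sketch; replace {{vpc-id}} with the ID returned by the first command):

aws ec2 describe-vpcs --filters "Name=cidr,Values=192.168.0.0/16" | jq '.Vpcs[].VpcId'
aws ec2 describe-subnets --filters "Name=vpc-id,Values={{vpc-id}}" | jq '.Subnets[] | {SubnetId, CidrBlock, AvailabilityZone}'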

Check the status of the cluster

You can check the status of the cluster creation with the following command:

banzai cluster get "eks-cluster"
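
You can also list all of your clusters with:

banzai cluster list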

Once the cluster is ready, you can try it with some simple commands. banzai cluster shell executes a shell within the context of the selected cluster. If you type a command in the opened shell, or pass it as arguments, it is executed in a prepared environment. For example, you can list the nodes of the cluster using the original kubectl command:

banzai cluster shell --cluster-name "eks-cluster" -- kubectl get nodes
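
You can also open an interactive shell and run further kubectl commands from there:

banzai cluster shell --cluster-name "eks-cluster"
# inside the opened shell:
kubectl get pods --all-namespaces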

Further steps

If you are happy with the results, go on with the Deploying workload guide to learn about the basic features of a cluster.