This section describes the values you can use in a cluster descriptor file when creating Amazon Elastic Kubernetes Service (EKS) clusters. This file specifies the properties of the cluster you want to create. For details on how to create the cluster, see Create cluster — Amazon Elastic Kubernetes Service (EKS).
name (string) 🔗︎
The name of the cluster. It must be unique within the Pipeline instance. If empty, Pipeline generates the name automatically, for example: my-test-cluster-2020-07-22
location (string) 🔗︎
The region of the cluster, for example: us-east-2. To list the regions where EKS is available, you can use the following command:
curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/eks/regions | jq .
cloud (string) 🔗︎
The name of the cloud provider to use. In this scenario, it must be “amazon”.
secretName (string) 🔗︎
The name of an existing Pipeline secret to use, for example, my-eks-aws-secret. If you don’t already have a secret, create one. For details, see Create an AWS secret.
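Taken together, the top-level fields above form the skeleton of a cluster descriptor. The following minimal sketch only illustrates how they fit together; the name, region, and secret name are placeholder values, and the contents of the properties object are detailed in the rest of this section:
{
  "name": "my-eks-cluster",
  "location": "us-east-2",
  "cloud": "amazon",
  "secretName": "my-eks-aws-secret",
  "properties": {
    "eks": {}
  }
}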
properties (object) 🔗︎
The detailed specification of the cluster. For EKS clusters, it must include the ‘eks’ object. For example:
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
}
}
}
properties.eks (object) 🔗︎
The detailed specification of the EKS cluster. For example:
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
}
}
}
The following properties are required:
Customize access rights - properties.eks.authConfig (object) 🔗︎
When creating a new cluster, Pipeline automatically sets the access rights of the cluster using a configuration map (for details on the configuration map, see the official AWS documentation).
Pipeline sets the following access rights by default:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <<NodeInstanceRoleArn>>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: <<ClusterUserArn>>
      username: <<UserNameFromArn>>
      groups:
        - system:masters
To customize and extend the default access rights, you can include an authConfig section in the cluster definition JSON file. Pipeline merges the default and custom configuration maps and applies the result to the cluster. Note that the custom authConfig takes precedence, and overrides the default ConfigMap settings.
In the authConfig section, include the mapRoles, mapUsers, and mapAccounts sections of the AWS ConfigMap object as a JSON list. For details, see the official AWS documentation.
For example, if the YAML of your custom access roles looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Then the authConfig section in the cluster create JSON looks like this:
"properties": {
"eks": {
"authConfig": {
"mapRoles": [
{
"rolearn": "arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB",
"username": "system:node:{{EC2PrivateDNSName}}",
"groups": [
"system:bootstrappers",
"system:nodes"
]
}
]
}
}
}
The following example creates a simple cluster using the previous authConfig section:
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"authConfig": {
"mapRoles": [
{
"rolearn": "arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB",
"username": "system:node:{{EC2PrivateDNSName}}",
"groups": [
"system:bootstrappers",
"system:nodes"
]
}
]
},
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
}
}
}
}
EOF
You can also create similar configuration maps for user accounts:
"properties": {
"eks": {
"authConfig": {
"mapUsers": [
{
"userarn": "arn:aws:iam::{{AccountID}}:user/sample.user",
"username": "sample-user",
"groups": [
"system:masters",
"system:nodes"
]
}
]
}
}
}
mapAccounts (list) 🔗︎
The mapAccounts section of the aws-auth ConfigMap in JSON format, for example:
"properties": {
"eks": {
"authConfig": {
"mapAccounts": [
"account1",
"account2"
]
}
}
}
mapRoles (list) 🔗︎
The mapRoles section of the aws-auth ConfigMap in JSON format. Each entry contains the following properties:
- rolearn: The ARN of the IAM role to add.
- username: The user name within Kubernetes to map to the IAM role.
- groups: A list of groups within Kubernetes to which the role is mapped.
For example:
"properties": {
"eks": {
"authConfig": {
"mapRoles": [
{
"rolearn": "arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB",
"username": "system:node:{{EC2PrivateDNSName}}",
"groups": [
"system:bootstrappers",
"system:nodes"
]
}
]
}
}
}
mapUsers (list) 🔗︎
The mapUsers section of the aws-auth ConfigMap in JSON format. Each entry contains the following properties:
- userarn: The ARN of the IAM user to add.
- username: The user name within Kubernetes to map to the IAM user.
- groups: A list of groups within Kubernetes to which the user is mapped.
For example:
"properties": {
"eks": {
"authConfig": {
"mapUsers": [
{
"userarn": "arn:aws:iam::{{AccountID}}:user/sample.user",
"username": "sample-user",
"groups": [
"system:masters",
"system:nodes"
]
}
]
}
}
}
properties.eks.apiServerAccessPoints (object) 🔗︎
Lists the access point types for the API server: public or private.
Default: public
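For example, assuming the access point types are given as a list of strings (as in the other list-valued properties in this section), a descriptor fragment that enables both endpoint types could look like this sketch:
"properties": {
  "eks": {
    "version": "{{eks-version}}",
    "apiServerAccessPoints": ["public", "private"]
  }
}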
Envelope encryption - properties.eks.encryptionConfig (list) 🔗︎
Configures additional encryption using the provided customer master keys (CMK) for the specified resources.
The following example creates an EKS cluster on AWS and adds an additional encryption layer to the secrets resources using the CMK identified by the arn:aws:kms:us-west-2:{{AccountID}}:key/{{KeyID}} ARN.
banzai cluster create <<EOF
{
"name": "eks-cluster-with-envelope-encryption",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"encryptionConfig": [
{
"provider": {
"keyARN": "arn:aws:kms:us-west-2::{{AccountID}}:key/{{KeyID}}"
},
"resources": [
"secrets"
]
}
],
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
}
}
}
}
EOF
To use a CMK from an external account, roles or users other than the CMK owner need both of the following additional permissions.
- The key policy for the CMK in the owner account - determining who can have access to the CMK - MUST give the external account, role or user permission to use the CMK.
- IAM policies in the external account - determining who does have access to the CMK - MUST delegate the key policy permissions to its corresponding roles and users.
To allow an external user to use the CMK with AWS services that integrate with AWS KMS, you also need to attach IAM policies to the identity that give the user permission to use the AWS service. You might also need other permissions added to the key policy or the IAM policy, as described in the documentation of the particular AWS service.
Permit access to CMK for an external account, role or user 🔗︎
You must attach a key policy to a CMK to allow its use by an external account, role or user.
The following key policy allows the account with ID 444455556666 to use the CMK the policy is attached to.
{
"Sid": "Allow an external account to use this CMK",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::444455556666:root"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
}
The following key policy allows the role ExampleRole and user ExampleUser, both from the account 444455556666, to use the CMK the policy is attached to.
{
"Sid": "Allow an external account to use this CMK",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::444455556666:role/ExampleRole",
"arn:aws:iam::444455556666:user/ExampleUser"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
}
Permit access to CMK for a role or user 🔗︎
You must attach an IAM policy to the role or user in the external account that wants to use the CMK.
The following IAM policy allows access to the CMK identified by the ARN arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab in the account with ID 111122223333 for the role or user the policy is attached to.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow Use Of CMK In Account 111122223333",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
]
}
More on envelope encryption: 🔗︎
- AWS envelope encryption
- AWS customer master key management
- AWS encryption configuration
- AWS envelope encryption CMK requirements
- AWS external account, role or user permissions to use a CMK
properties.eks.iam (object) 🔗︎
Specifies the user or IAM role Pipeline uses to create the cluster.
clusterRoleId (string) 🔗︎
The identifier of an existing IAM role to be used for creating the EKS cluster. If not provided, a new IAM role is created for the cluster (requires IAM Write Access).
nodeInstanceRoleId (string) 🔗︎
The identifier of an existing IAM role to be used for creating the EKS nodes. If not provided, a new IAM role is created for the nodes (requires IAM Write Access).
defaultUser (true | false) 🔗︎
If true, the user ID associated with the AWS secret of the cluster is used in the kubeconfig. In this case, no IAM user is created.
Default: false
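For example, a sketch of the properties.eks.iam object that reuses pre-created roles could look like the following; the {{cluster-role-id}} and {{node-instance-role-id}} placeholders stand for identifiers of existing IAM roles in your account:
"properties": {
  "eks": {
    "version": "{{eks-version}}",
    "iam": {
      "clusterRoleId": "{{cluster-role-id}}",
      "nodeInstanceRoleId": "{{node-instance-role-id}}",
      "defaultUser": true
    }
  }
}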
Cluster logging - properties.eks.logTypes (list) 🔗︎
Configures control plane logging for the cluster. Note that control plane logs are disabled by default.
The following cluster control plane log types are available, corresponding to the components of the Kubernetes control plane.
- api: Kubernetes API server component logs. For details, see kube-apiserver in the official Kubernetes documentation.
- audit: Kubernetes audit logs. For details, see Auditing in the official Kubernetes documentation.
- authenticator: Authenticator logs represent the control plane component that Amazon EKS uses for Kubernetes Role Based Access Control (RBAC) authentication using IAM credentials. For details, see Cluster authentication in the official Amazon EKS documentation.
- controllerManager: The controller manager manages the core control loops of Kubernetes. For details, see kube-controller-manager in the official Kubernetes documentation.
- scheduler: The scheduler manages the pods in your cluster. For details, see kube-scheduler in the official Kubernetes documentation.
The following example creates a 3-node EKS cluster and enables all the available control plane log types.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
},
"logTypes": ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}
}
}
EOF
Node pools - properties.eks.nodePools (object) 🔗︎
The properties.eks.nodePools object defines the node pools and the nodes of the cluster. Each node pool has an identifier (for example, pool1, pool2, or something more meaningful), and contains the properties of the node pool.
The following properties are required:
For example:
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"minCount": 3,
"maxCount": 4,
"instanceType": "t2.medium"
}
}
}
}
autoscaling (true | false) 🔗︎
If autoscaling is enabled, the number of nodes in the node pool can be increased and decreased according to the load of the cluster. You can set the minimum and maximum number of nodes in the minCount
and maxCount
values, respectively.
count (integer) 🔗︎
The number of nodes in the node pool. Note that if you enable autoscaling, the actual number of nodes may be different from this number, depending on the load of the cluster.
iam (string) 🔗︎
The identifier of an EKS IAM object. For details, see properties.eks.iam.
image (string) 🔗︎
The ID of the machine image to run on the node, for example, ami-06d1667f.
instanceType (string) 🔗︎
The type of virtual computer to use for the node, for example, “t2.medium” or “c5.large”.
You can check the available regions and instance types in our Cloudinfo service.
labels (object) 🔗︎
The labels to add to the nodes of the node pool, as key-value pairs. For example:
{"example.io/label1":"value1"}
maxCount (integer) 🔗︎
The maximum number of nodes in the node pool. If autoscaling is enabled and the load of the cluster is high, the number of nodes can be increased to this level.
minCount (integer) 🔗︎
The minimum number of nodes in the node pool. If autoscaling is enabled and the load of the cluster is low, the number of nodes can be decreased to this level.
securityGroups (list) 🔗︎
By default, Pipeline adds the node pools it creates to a security group that allows the following actions:
- Allow nodes to communicate with each other
- Allow pods to communicate with the cluster API Server
- Allow the cluster control plane to communicate with worker Kubelet and pods
- Allow worker Kubelets and pods to receive communication from the cluster control plane
- Allow SSH access to the nodes
Use the securityGroups option to list additional security groups to add to the nodes of the node pool. For example:
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"securityGroups": ["sg-00000xxxx0000xxx1", "sg-00000xxxx0000xxx2"],
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "t2.medium",
"subnet": {
"subnetId": "{{subnet-us-east-2a-zone}}"
}
}
}
}
}
You can also modify the security groups of an existing node pool if you update the node pool using the Pipeline API. In this case, the new securityGroups values overwrite the old values. To delete the custom security groups, submit an empty list to securityGroups.
spotPrice (float) 🔗︎
If not zero, this value shows the price (in USD) that you are willing to pay to run the node pool on spot instances. If zero, the node pool uses only on-demand instances. Spot instances allow you to use the unused EC2 capacity in the AWS cloud at a discount compared to On-Demand prices.
subnet (object) 🔗︎
Specifies the identifier of an existing subnet, or the CIDR (and optionally, the availability zone) of a new subnet where the node pool is created. For example:
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0.06",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "t2.medium",
"subnet": {
"subnetId": "{{subnet-us-east-2a-zone}}"
}
},
"pool2": {
"spotPrice": "0.2",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "c5.large",
"subnet": {
"subnetId": "{{subnet-us-east-2b-zone}}"
}
}
}
}
}
useInstanceStore (true | false) 🔗︎
Use instance store volumes (NVMe disks) for the node pool. As a result, the root directory of Kubelet and the emptyDir volumes of the pods are provisioned on local instance storage disks. Pipeline automatically initializes the instance store volumes if needed.
Note: If the instance type does not support instance store volumes, Pipeline simply ignores the useInstanceStore setting. For the list of instance types that support instance store volumes, see the AWS documentation.
Default: false
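A sketch of a node pool that opts in to instance store volumes might look like the following; m5d.large is only an example of an instance type with NVMe instance storage:
"nodePools": {
  "pool1": {
    "spotPrice": "0",
    "count": 3,
    "instanceType": "m5d.large",
    "useInstanceStore": true
  }
}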
volumeEncryption (object) 🔗︎
An object describing the node pool nodes’ EBS volume encryption setting with a boolean switch and optionally a KMS encryption key.
Omitting the volume encryption setting causes a fallback to the Pipeline control plane configuration value or the AWS account default value in that order of precedence.
Omitting the encryption key results in using the default AWS encryption key associated with the corresponding account.
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0.06",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "t2.medium",
"volumeEncryption": {
"enabled": true,
"encryptionKeyARN": "arn:aws:kms:{{aws-region}}:{{aws-account-id}}:key/{{aws-kms-key-id}}"
}
}
}
}
}
**If you specify a customer managed key (CMK) for Amazon EBS volume encryption for a node pool with autoscaling enabled:**
- **once per AWS account, you must create the appropriate service-linked role (AWSServiceRoleForAutoScaling), and**
- **for every CMK, you must give the service-linked role access to the CMK so Amazon EC2 Auto Scaling can launch instances on your behalf.**
The process of creating the service-linked role is described in Minimal privileges required for Pipeline in EKS.
For a CMK used for EBS volume encryption of node pool nodes - where the CMK and the autoscaling group are in the same account - add the following permissions to the key policy of the CMK:
{
"Sid": "Allow service-linked role use of the CMK (in the same account)",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::{{aws-account-id}}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
}
{
"Sid": "Allow attachment of persistent resources (in the same account)",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::{{aws-account-id}}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
]
},
"Action": [
"kms:CreateGrant"
],
"Resource": "*",
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true
}
}
}
For a CMK used for EBS volume encryption of node pool nodes - where the CMK and the autoscaling group are in different accounts - add the following permissions to the key policy of the CMK:
{
"Sid": "Allow external account {{aws-autoscale-group-account-id}} use of the CMK",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::{{aws-autoscale-group-account-id}}:root"
]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
}
{
"Sid": "Allow attachment of persistent resources in external account {{aws-autoscale-group-account-id}}",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::{{aws-autoscale-group-account-id}}:root"
]
},
"Action": [
"kms:CreateGrant"
],
"Resource": "*"
}
and also create a grant from the external account that delegates the relevant permissions to the appropriate service-linked role:
aws kms create-grant \
--region {{aws-region}} \
--key-id arn:aws:kms:{{aws-region}}:{{aws-cmk-account-id}}:key/{{aws-kms-key-id}} \
--grantee-principal arn:aws:iam::{{aws-autoscale-group-account-id}}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling \
--operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
For this operation to succeed, the executing user or role must have the following IAM policy permission in the autoscale group account to perform the CreateGrant action in the CMK owning account:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow CreateGrant for the CMK in external account {{aws-cmk-account-id}}",
"Effect": "Allow",
"Action": "kms:CreateGrant",
"Resource": "arn:aws:kms:{{aws-region}}:{{aws-cmk-account-id}}:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d"
}
]
}
You can find more information on EBS encryption CMK usage and permissions in the corresponding AWS documentation.
volumeSize (integer) 🔗︎
The explicit size of the EBS volume in GiB to allocate for each node in the node pool. When no explicit node volume size is specified, the Pipeline instance’s default EKS node volume size configuration value is used instead. When neither an explicit node volume size is requested nor a Pipeline instance default EKS node volume size is configured, the volume size is equal to the AMI size, but at least 50 GiB.
In every case, the node volume size MUST be greater than or equal to the size of the Amazon Machine Image (AMI) specified in the image property, or of the default image when no explicit image ID is provided.
Example: a cluster with a single node pool 🔗︎
The following command creates a 3-node EKS cluster on AWS (the master node is provided by the EKS service) with a single node pool called “pool1”. All nodes of the cluster have the same instance type (t2.medium). Autoscaling is enabled, so under high load the cluster can scale up to 4 nodes.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
}
}
}
}
EOF
Example: multiple node pools on spot instances 🔗︎
The following example creates a heterogeneous EKS cluster with three different node pools (called pool1, pool2, and pool3). The node pools use different instance types. The cluster is created from spot instances, because the spotPrice option is set for the node pools.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-east-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0.0464",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "t2.medium"
},
"pool2": {
"spotPrice": "0.1",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "m4.large"
},
"pool3": {
"spotPrice": "0.133",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "r4.large"
}
}
}
}
}
EOF
Example: explicitly set volume size 🔗︎
The following command creates a 3-node EKS cluster on AWS (the master node is
provided by the EKS service) with a single node pool called “pool1”. All nodes
of the cluster have the same instance type (t2.medium). Autoscaling is disabled.
Each node’s volume size in the node pool pool1 is set to 20 GiB.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"autoscaling": false,
"volumeSize": 20,
"instanceType": "t2.medium",
}
}
}
}
}
EOF
volumeType (string) 🔗︎
The explicit type of the EBS volume to allocate for each node in the node pool. When no explicit node volume type is specified, a default volume type is used.
Example: explicitly set volume type 🔗︎
The following command creates a 3-node EKS cluster on AWS (the master node is
provided by the EKS service) with a single node pool called “pool1”. All nodes
of the cluster have the same instance type (t2.medium). Autoscaling is disabled.
Each node’s volume type in the node pool pool1 is set to gp3.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"autoscaling": false,
"volumeType": "gp3",
"instanceType": "t2.medium",
}
}
}
}
}
EOF
Routing for existing subnets - properties.eks.routeTableId (string) 🔗︎
The identifier of the route table of the VPC to be used by the subnets. Note that the route table is used only when subnets are created in an existing VPC.
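For example, a sketch that creates new subnets in an existing VPC and references its route table could look like the following; the {{vpc-id}} and {{route-table-id}} values are placeholders for identifiers from your AWS account:
"properties": {
  "eks": {
    "version": "{{eks-version}}",
    "vpc": {
      "vpcId": "{{vpc-id}}"
    },
    "routeTableId": "{{route-table-id}}",
    "subnets": [
      {
        "cidr": "192.168.64.0/20",
        "availabilityZone": "us-east-2a"
      },
      {
        "cidr": "192.168.80.0/20",
        "availabilityZone": "us-east-2b"
      }
    ]
  }
}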
Subnets - properties.eks.subnets (object) 🔗︎
The properties.eks.subnets object contains the details of the subnets where the cluster will be created. It contains a list of subnets: reference existing subnets using their ID, or specify new subnets using their availability zone and CIDR. You must list at least two subnets. All worker nodes are launched in the same subnet (the first subnet in the list - which is not necessarily the first subnet in the cluster create request payload, as deserialization may change the order), unless a subnet is specified at the node pool level for the workers that belong to that node pool.
For example:
"subnets": [
{
"subnetId": "{{subnet-us-east-2a-zone}}"
},
{
"subnetId": "{{subnet-us-east-2b-zone}}"
},
{
"subnetId": "{{subnet-us-east-2c-zone}}"
}
]
or
"subnets": [
{
"cidr": "192.168.64.0/20",
"availabilityZone": "us-east-2a"
},
{
"cidr": "192.168.80.0/20",
"availabilityZone": "us-east-2b"
}
]
availabilityZone (string) 🔗︎
The Availability Zone (AZ) where the subnet is created.
subnetId (string) 🔗︎
The identifier of an existing subnet to be used for creating the EKS cluster. If subnetId is not provided, a new subnet is created for the cluster. When creating a new subnet, you can specify the CIDR range for the subnet.
cidr (string) 🔗︎
The CIDR range for the subnet when a new subnet is created. If not provided, the default value is used: 192.168.0.0/16.
Add custom tags - properties.eks.tags (object) 🔗︎
In addition to the usual Pipeline-managed tags, you can add custom tags (key-value pairs) to the cluster. The following command creates a 3-node EKS cluster. In addition to the usual Pipeline-managed tags, it also adds customTag1 and customTag2 to the resources created by CloudFormation stacks used to create the cluster.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-west-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0",
"count": 3,
"minCount": 3,
"maxCount": 4,
"autoscaling": true,
"instanceType": "t2.medium"
}
},
"tags": {
"customTag1": "customValue1",
"customTag2": "customValue2"
}
}
}
}
EOF
properties.eks.version (string) 🔗︎
The Kubernetes version to run on the cluster. To list the supported versions and the default version in a given region, run the following command:
curl https://banzaicloud.com/cloudinfo/api/v1/providers/amazon/services/eks/regions/eu-west-1/versions | jq .
For the list of supported versions for different cloud providers, see Provider support.
Virtual Private Cloud (VPC) - properties.eks.vpc (object) 🔗︎
The properties.eks.vpc object contains the details of the VPC where the cluster will be created.
"vpc": {
"vpcId": "{{vpc-id}}"
},
vpcId (string) 🔗︎
The identifier of an existing Virtual Private Cloud (VPC) to be used for creating the EKS cluster. If vpcId is not provided, a new VPC is created for the cluster. When creating a new VPC, you can specify the CIDR range for the VPC.
cidr (string) 🔗︎
The CIDR range for the VPC when a new VPC is created. If not provided, the default value is used: 192.168.0.0/16.
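For example, to let Pipeline create a new VPC with a custom CIDR range instead of reusing an existing one, the vpc object can be given only a cidr value (a sketch, using the same range as the Multi-AZ example below):
"vpc": {
  "cidr": "192.168.0.0/16"
}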
Example: Multi-AZ cluster in existing VPC and Subnets 🔗︎
The following example creates an EKS cluster with worker nodes distributed across existing subnets, supporting advanced use cases that require spreading worker nodes across multiple subnets and AZs.
The cluster uses the subnets listed under the subnets section. This list should also contain subnets that might be used in the future (for example, additional subnets that might be used by new node pools when the cluster is expanded later).
Node pools are created in the subnet that is specified for the node pool in the nodePools.{{node-pool-id}}.subnet.subnetId section. If you don’t specify a subnet for a node pool, the node pool is created in one of the subnets from the subnets list. Multiple node pools can share the same subnet.
Note: The subnets must have appropriate routing set up to allow outbound access to the Internet for worker nodes.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-east-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0.06",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "t2.medium",
"subnet": {
"subnetId": "{{subnet-us-east-2a-zone}}"
}
},
"pool2": {
"spotPrice": "0.2",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "c5.large",
"subnet": {
"subnetId": "{{subnet-us-east-2b-zone}}"
}
},
"pool3": {
"spotPrice": "0.2",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "c5.xlarge",
"subnet": {
"subnetId": "{{subnet-us-east-2c-zone}}"
}
}
},
"vpc": {
"vpcId": "{{vpc-id}}"
},
"subnets": [
{
"subnetId": "{{subnet-us-east-2a-zone}}"
},
{
"subnetId": "{{subnet-us-east-2b-zone}}"
},
{
"subnetId": "{{subnet-us-east-2c-zone}}"
}
]
}
}
}
EOF
Example: Multi-AZ cluster in new VPC and Subnets 🔗︎
The following example creates an EKS cluster with worker nodes distributed across multiple subnets created by Pipeline. This differs from the previous Multi-AZ example (into an existing VPC and subnets) only in that the EKS cluster is not created in existing infrastructure (because vpcId and subnetId are not provided); instead, Pipeline provisions the infrastructure for the cluster.
The VPC and all subnet definitions are collected from the payload and created by Pipeline with the provided parameters.
banzai cluster create <<EOF
{
"name": "eks-cluster",
"location": "us-east-2",
"cloud": "amazon",
"secretName": "eks-aws-secret",
"properties": {
"eks": {
"version": "{{eks-version}}",
"nodePools": {
"pool1": {
"spotPrice": "0.06",
"count": 1,
"minCount": 1,
"maxCount": 3,
"autoscaling": true,
"instanceType": "t2.medium",
"subnet": {
"cidr": "192.168.64.0/20",
"availabilityZone": "us-east-2a"
}
},
"pool2": {
"spotPrice": "0.2",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "c5.large",
"subnet": {
"cidr": "192.168.80.0/20",
"availabilityZone": "us-east-2b"
}
},
"pool3": {
"spotPrice": "0.2",
"count": 1,
"minCount": 1,
"maxCount": 2,
"autoscaling": true,
"instanceType": "c5.xlarge",
"subnet": {
"cidr": "192.168.96.0/20",
"availabilityZone": "us-east-2c"
}
}
},
"vpc": {
"cidr": "192.168.0.0/16"
},
"subnets": [
{
"cidr": "192.168.64.0/20",
"availabilityZone": "us-east-2a"
},
{
"cidr": "192.168.80.0/20",
"availabilityZone": "us-east-2b"
}
]
}
}
}
EOF