As of version 1.6, Kubernetes provides role-based access control (RBAC), so administrators can set up fine-grained access to a variety of Kubernetes resources. Fully explaining why it makes sense to use RBAC is beyond the scope of this post but, in a nutshell, RBAC provides the level of control that most enterprises need to meet their security requirements within Kubernetes clusters.
Processes and human operators that assume the identity of a Kubernetes service account authenticate as that account and gain its associated access rights. An administrator can grant access rights to a service account by using RBAC to control which Kubernetes resources it can operate on, and which actions it can carry out on those resources.
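To illustrate, a minimal Role and RoleBinding granting a hypothetical service account read-only access to Pods in a namespace could look like this (all names here are made up for the example; the `rbac.authorization.k8s.io/v1beta1` API version matches the one used later in this post):

```yaml
# hypothetical example: let the "reporting" service account read Pods in "default"
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: ServiceAccount
  name: reporting
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```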
In-cluster processes 🔗︎
In-cluster processes are processes that run inside Pods. When interacting with the Kubernetes API Server, these processes use the service account specified in their pod definition for authentication. If no service account is specified, they use the default service account of their namespace.
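For example, a pod that should run under a specific service account names it in its spec through the `serviceAccountName` field (the pod name, image, and service account name below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: my-service-account  # processes in this pod authenticate as this account
  containers:
  - name: app
    image: example/app:latest
```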
External processes 🔗︎
These are processes, e.g. automation tools, running outside of the Kubernetes cluster. In order to interact with the Kubernetes API Server, they need to assume the identity of a service account through which they are granted the necessary access to Kubernetes resources. External processes can use a Kubernetes service account for authentication via service account tokens. Apart from this difference in authentication, external processes using a service account are authorized in the same way as in-cluster processes.
Human operators 🔗︎
Human operators may execute operations on an RBAC-enabled GKE cluster using the Google Console or the Google Cloud CLI. It is, however, not unheard of for human operators to prefer a simple kubectl command that authenticates as a service account, rather than using their Google Cloud IAM identity. Just like external processes, they can achieve this kind of authentication with service account tokens.
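For instance, given the API server address, its CA certificate, and a service account token, an invocation along these lines authenticates as that service account (all values below are placeholders):

```shell
# authenticate with a service account token instead of a Google Cloud identity
kubectl get pods \
  --server=https://<api-server-address> \
  --certificate-authority=/path/to/ca.pem \
  --token=<service-account-token>
```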
How does Pipeline do it? 🔗︎
Banzai Cloud’s Pipeline provisions RBAC-enabled GKE clusters for users, generates a service account with cluster admin privileges, and uses the account to configure and deploy those components that, together, make up the features provided by Pipeline — such as out-of-the-box monitoring, centralized logging, spot/preemptible scheduling, security scans and backups, just to name a few. It also generates a kubeconfig
that human operators can use with kubectl
for authentication via the same service account.
Pipeline requires a Google Cloud credential of the service account type (note: this is not a Kubernetes service account) for engaging Google's API to perform GKE cluster CRUD operations. These credentials are stored securely in Vault.
Once the GKE cluster is created, Pipeline starts deploying various components, which interact with the Kubernetes API Server through the kubernetes/client-go library behind the scenes. This library requires Kubernetes credentials to interact with the Kubernetes API Server. At this point, however, all we have is a Google Cloud service account identity, which looks something like this:
```json
{
  "type": "service_account",
  "project_id": "<your-google-cloudproject-id>",
  "private_key_id": "....",
  "private_key": "....",
  "client_email": "<some-name>@<your-google-cloudproject-id>.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/<some-name>%40<your-google-cloudproject-id>.iam.gserviceaccount.com"
}
```
The question now is how Pipeline can pass this credential to the Kubernetes client-go library for authentication, when client-go expects a Kubernetes credential. The answer is to use the Kubernetes client GCP authenticator plugin.
When Pipeline uses the GCP authenticator, it requests a short-lived authentication token for the purposes of authenticating to the Kubernetes API Server of a GKE cluster.
```go
...
ctx := context.Background()
googleCredentials, err := google.CredentialsFromJSON(ctx, googleServiceAccountJSON, "https://www.googleapis.com/auth/cloud-platform")
if err != nil {
	return nil, err
}

credentialsClient, err :=
	credentials.NewIamCredentialsClient(ctx, option.WithCredentials(googleCredentials))
if err != nil {
	return nil, err
}

defer credentialsClient.Close()

// requires Service Account Token Creator and Service Account User IAM roles
req := credentialspb.GenerateAccessTokenRequest{
	Name: fmt.Sprintf("projects/-/serviceAccounts/%s", serviceAccountEmailOrId),
	Lifetime: &duration.Duration{
		Seconds: 600, // token expires after 10 mins
	},
	Scope: []string{
		"https://www.googleapis.com/auth/cloud-platform",
		"https://www.googleapis.com/auth/userinfo.email",
	},
}

tokenResp, err := credentialsClient.GenerateAccessToken(ctx, &req)
if err != nil {
	return nil, err
}

return tokenResp, nil
```
The resultant GenerateAccessTokenResponse has an AccessToken
field, which contains an OAuth 2.0 access token that we can use for authentication with the Kubernetes API server. Note that the requested token in the above snippet will expire after 10 minutes.
To make the above mentioned Google API calls, the Google service account requires the Service Account Token Creator and Service Account User IAM roles. Additionally, the IAM Service Account Credentials API service must be enabled on the Google project.
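Assuming the gcloud CLI is authenticated and configured for the target project, enabling the API and granting the roles can be sketched as follows (project and service account names are placeholders):

```shell
# enable the IAM Service Account Credentials API on the project
gcloud services enable iamcredentials.googleapis.com

# grant the roles required by the GenerateAccessToken call above
gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:<some-name>@<your-project-id>.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:<some-name>@<your-project-id>.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```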
We can now construct the Kubernetes client configuration, using our OAuth 2.0 token:
```go
...
tokenExpiry := time.Unix(tokenResp.GetExpireTime().GetSeconds(), int64(tokenResp.GetExpireTime().GetNanos()))

// kubernetes config using GCP authenticator
config := &rest.Config{
	Host: host, // API Server address
	TLSClientConfig: rest.TLSClientConfig{
		CAData: capem, // API Server CA Cert
	},
	AuthProvider: &api.AuthProviderConfig{
		Name: "gcp",
		Config: map[string]string{
			"access-token": tokenResp.GetAccessToken(),
			"expiry":       tokenExpiry.Format(time.RFC3339Nano),
		},
	},
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
	return err
}
```
Note the AuthProviderConfig literal here, through which we specify the type of authenticator to be used (gcp) and its access-token and expiry configuration fields.
With this Kubernetes client config, Pipeline can interact with the API Server and create a Kubernetes service account:
```go
...
serviceAccount := &v1.ServiceAccount{
	ObjectMeta: metav1.ObjectMeta{
		Name: "cluster-admin-sa",
	},
}

_, err := clientset.CoreV1().ServiceAccounts("default").Create(serviceAccount)
if err != nil && !errors.IsAlreadyExists(err) {
	return err
}
...
```
After that, we create a cluster role that grants cluster admin privileges:
```go
...
clusterAdmin := "cluster-admin"
adminRole := &v1beta1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{
		Name: clusterAdmin,
	},
	Rules: []v1beta1.PolicyRule{
		{
			APIGroups: []string{"*"},
			Resources: []string{"*"},
			Verbs:     []string{"*"},
		},
		{
			NonResourceURLs: []string{"*"},
			Verbs:           []string{"*"},
		},
	},
}
clusterAdminRole, err := clientset.RbacV1beta1().ClusterRoles().Get(clusterAdmin, metav1.GetOptions{})
if err != nil {
	clusterAdminRole, err = clientset.RbacV1beta1().ClusterRoles().Create(adminRole)
	if err != nil {
		return err
	}
}
...
```
Bind the cluster role to a service account:
```go
...
clusterRoleBinding := &v1beta1.ClusterRoleBinding{
	ObjectMeta: metav1.ObjectMeta{
		Name: "cluster-admin-sa-clusterRoleBinding",
	},
	Subjects: []v1beta1.Subject{
		{
			Kind:      "ServiceAccount",
			Name:      serviceAccount.Name,
			Namespace: "default",
			APIGroup:  v1.GroupName,
		},
	},
	RoleRef: v1beta1.RoleRef{
		Kind:     "ClusterRole",
		Name:     clusterAdminRole.Name,
		APIGroup: v1beta1.GroupName,
	},
}
if _, err = clientset.RbacV1beta1().ClusterRoleBindings().Create(clusterRoleBinding); err != nil && !k8sErrors.IsAlreadyExists(err) {
	return err
}
...
```
Now that a Kubernetes service account has been created with the cluster admin role, Pipeline will generate a Kubernetes config JSON that users can download to use with kubectl
to authenticate as a cluster admin. For this, Pipeline has to pull the access token of the Kubernetes service account.
Kubernetes creates long-lived access tokens for service accounts, and stores them as Kubernetes secrets. The names of these secrets can be found in the ServiceAccount resources.
```go
...
if serviceAccount, err = clientset.CoreV1().ServiceAccounts("default").Get("cluster-admin-sa", metav1.GetOptions{}); err != nil {
	return "", err
}

if len(serviceAccount.Secrets) > 0 {
	secret := serviceAccount.Secrets[0]
	secretObj, err := clientset.CoreV1().Secrets(defaultNamespace).Get(secret.Name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	if token, ok := secretObj.Data["token"]; ok {
		return string(token), nil
	}
}
...
```
The generated Kubernetes client config:
```yaml
apiVersion: v1
clusters:
...
contexts:
- context:
  ...
users:
- name: <user-name>
  user:
    token: <the value taken from the 'token' field of the secret of the service account>
...
kind: Config
```
Additionally, less privileged Kubernetes service accounts can be created in a similar way in order to grant limited access to GKE clusters.
The following diagram shows the flow described above:
About Banzai Cloud Pipeline 🔗︎
Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.