
At Banzai Cloud we are building a feature-rich, enterprise-grade application and DevOps container management platform called Pipeline, as well as a CNCF-certified Kubernetes distribution, PKE. Security is one of our main areas of focus, and we strive to automate and enable the security patterns we consider essential for all enterprises that use Pipeline. Istio is no exception: we apply the best available security practices to the service mesh, while maintaining the sleekest, most automated user experience possible.

Here are a few posts from our security series:

  • Authentication and authorization of Pipeline users with OAuth2 and Vault
  • Dynamic credentials with Vault using Kubernetes Service Accounts
  • Dynamic SSH with Vault and Pipeline
  • Secure Kubernetes Deployments with Vault and Pipeline
  • Policy enforcement on K8s with Pipeline
  • The Vault swiss-army knife
  • The Banzai Cloud Vault Operator
  • Vault unseal flow with KMS
  • Kubernetes secret management with Pipeline
  • Container vulnerability scans with Pipeline
  • Kubernetes API proxy with Pipeline

The inclusion of Istio in our platform has been among our most frequently requested features over the last few months. Several weeks ago we released our open source Istio operator, designed to help ease the somewhat difficult task of managing Istio. Since its initial release we’ve added quite a few new features, including Istio 1.1 support, single mesh multi-cluster support, autoscaling based on Istio metrics and a lot more - for a complete list, check our Istio operator’s GitHub page.

tl;dr πŸ”—︎

The latest version of the Banzai Cloud Istio operator supports the Istio CNI plugin, which renders usage of privileged Istio init containers obsolete.

Manipulating traffic flow πŸ”—︎

In order for the service mesh to work, Istio needs to place an Envoy proxy inside every pod in the mesh and then manipulate each pod’s traffic flow with iptables rules, so that traffic to and from the application is redirected through the injected Envoy proxy. Because each pod’s iptables rules are network-namespaced, these changes don’t affect other pods on the node.

By default, Istio uses an injected initContainer called istio-init to create the necessary iptables rules before the other containers in the pod start. This requires the user or service account deploying pods into the mesh to have sufficient privileges to deploy containers with the NET_ADMIN capability - a kernel capability that allows the reconfiguration of networks. In general, we want to avoid granting application pods this capability.
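
To illustrate, here is a simplified sketch of the initContainer that the sidecar injector normally adds to each pod; the image tag and the exact iptables arguments vary between Istio versions, so treat it as illustrative only:

initContainers:
- name: istio-init
  # sets up the iptables REDIRECT rules for the pod before the app and Envoy start
  image: docker.io/istio/proxy_init:1.1.0
  securityContext:
    capabilities:
      add:
      - NET_ADMIN  # needed to modify iptables inside the pod's network namespace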

Istio without CNI plugin

One way of mitigating this issue is to move the iptables rule configuration out of the pod and perform it with a CNI plugin instead.

What is CNI πŸ”—︎

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.

The Istio CNI plugin replaces the istio-init container, providing the same functionality without requiring Istio users to grant elevated privileges. It performs traffic redirection during the pod’s network setup phase, thereby removing the NET_ADMIN capability requirement for users deploying pods into the Istio mesh.

Prerequisites πŸ”—︎

  • Install Kubernetes with a container runtime that supports CNI, and with the kubelet configured so that the main CNI plugin is enabled via --network-plugin=cni
    • Banzai Cloud PKE supports this out-of-the-box
    • Kubernetes installations for AWS EKS, Azure AKS, and IBM Cloud IKS clusters also share this capability
    • Google Cloud GKE clusters require that the network-policy feature be enabled in order for Kubernetes to be configured with --network-plugin=cni

If you’re interested in using this feature with a single click on up to 6 different cloud providers or on-premise, look no further; use the Pipeline platform for free.
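
If you manage the nodes yourself and want to double-check this prerequisite, a quick (and admittedly rough) way to do so on a node is to verify the kubelet flag and the presence of a CNI network configuration; the paths below are the conventional defaults:

# on one of the nodes: the kubelet should report network-plugin=cni
ps aux | grep kubelet | grep -o 'network-plugin=[a-z]*'
# the CNI configuration directory should contain at least one network config
ls /etc/cni/net.d/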

Identifying pods that require traffic redirection πŸ”—︎

The Istio CNI plugin identifies the pods that require traffic redirection by checking them against the following conditions:

  • The pod is NOT in a namespace in the configured excludeNamespaces list
  • The pod has a container named istio-proxy
  • The pod has more than 1 container
  • The pod has no annotation with key sidecar.istio.io/inject OR the value of the annotation is true
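
The last condition also provides a way to explicitly opt a pod out of redirection: set the standard injection annotation to false in the pod template. For example, a minimal fragment of a Deployment’s pod template:

template:
  metadata:
    annotations:
      # pods carrying this annotation are skipped by the sidecar injector and by the CNI plugin
      sidecar.istio.io/inject: "false"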

Istio’s CNI plugin operates as a chained CNI plugin, which means that its configuration is appended to the existing CNI plugin configuration as a new element of the configuration list. See the CNI specification for further details.
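
As a rough illustration, after the istio-cni installer has done its job, a node’s CNI network configuration list looks something like the following; the base plugin (Calico here) and the exact fields of the istio-cni entry depend on your environment and on the installer’s configuration, so the snippet is deliberately stripped down:

{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "calico"
    },
    {
      "type": "istio-cni"
    }
  ]
}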

Istio with CNI plugin

Demo using Banzai Cloud PKE πŸ”—︎

The best way of exploring this plugin and finding out exactly how it works is to spin up a Banzai Cloud PKE cluster using our CLI tool for Pipeline, simply called banzai.

~ ❯ banzai cluster create <<EOF
{
  "name": "istio-cni-demo-1290",
  "location": "eu-central-1",
  "cloud": "amazon",
  "secretName": "aws-secret",
  "properties": {
    "pke": {
      "nodepools": [
        {
          "name": "master",
          "roles": ["master"],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "master",
              "zones": ["eu-central-1a"],
              "instanceType": "c5.large",
              "launchConfigurationName": "master",
              "size": { "desired": 1, "min": 1, "max": 1 }
            }
          }
        },
        {
          "name": "system",
          "roles": ["pipeline-system"],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "system",
              "zones": ["eu-central-1a"],
              "instanceType": "t2.medium",
              "launchConfigurationName": "system",
              "size": { "desired": 1, "min": 1, "max": 1 }
            }
          }
        },
        {
          "name": "pool2",
          "roles": ["worker"],
          "provider": "amazon",
          "autoscaling": false,
          "providerConfig": {
            "autoScalingGroup": {
              "name": "pool2",
              "zones": ["eu-central-1a"],
              "instanceType": "t2.medium",
              "launchConfigurationName": "pool2",
              "size": { "desired": 1, "min": 1, "max": 1 }
            }
          }
        }
      ],
      "kubernetes": {"version": "1.12.2", "rbac": { "enabled": true }},
      "cri": {"runtime": "containerd"}
    }
  }
}
EOF

INFO[0004] cluster is being created
INFO[0004] you can check its status with the command `banzai cluster get "istio-cni-demo-1290"`
Id   Name
447  istio-cni-demo-1290

After a quick coffee break, your cluster will be up and running 🔗︎

~ ❯ banzai cluster get "istio-cni-demo-1290"
Id   Name                 Distribution  Status   StatusMessage
447  istio-cni-demo-1290  pke           RUNNING  Cluster is running

~ ❯ banzai cluster shell --cluster-name istio-cni-demo-1290
INFO[0004] Running /bin/zsh

~ [istio-cni-demo-1290] ❯ kubectl get nodes
NAME                                              STATUS   ROLES    AGE     VERSION
ip-192-168-67-149.eu-central-1.compute.internal   Ready    <none>   5m42s   v1.12.2
ip-192-168-74-53.eu-central-1.compute.internal    Ready    <none>   5m51s   v1.12.2
ip-192-168-79-123.eu-central-1.compute.internal   Ready    master   9m3s    v1.12.2

Install the Banzai Cloud Istio operator with Helm πŸ”—︎

You can do all these things using the Pipeline UI as well.

~ [istio-cni-demo-1290] ❯ helm repo add banzaicloud-stable http://kubernetes-charts.banzaicloud.com/branch/master
"banzaicloud-stable" has been added to your repositories
~ [istio-cni-demo-1290] ❯ helm install --name=istio-operator --namespace=istio-system banzaicloud-stable/istio-operator
NAME:   istio-operator
LAST DEPLOYED: Tue Mar 26 20:24:32 2019
NAMESPACE: istio-system
STATUS: DEPLOYED

~ [istio-cni-demo-1290] ❯ kubectl -n istio-system get pods
NAME                        READY   STATUS    RESTARTS   AGE
istio-operator-operator-0   2/2     Running   0          3m27s

Apply the following Istio CR πŸ”—︎

~ [istio-cni-demo-1290] ❯ cat <<EOF | kubectl -n istio-system apply -f -
apiVersion: istio.banzaicloud.io/v1beta1
kind: Istio
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: istio-sample
spec:
  mtls: true
  autoInjectionNamespaces:
  - "default"
  sidecarInjector:
    initCNIConfiguration:
      enabled: true
      excludeNamespaces:
      - "istio-system"
  imagePullPolicy: Always
EOF
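
Once the operator has reconciled this CR, it also deploys the CNI installer onto the nodes. In upstream Istio this runs as a DaemonSet called istio-cni-node; the exact resource names created by the operator may differ slightly, but something along these lines should show it:

~ [istio-cni-demo-1290] ❯ kubectl get daemonsets --all-namespaces | grep istio-cni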

Deploy Istio’s example Bookinfo application πŸ”—︎

~ [istio-cni-demo-1290] ❯ kubectl -n default apply -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
deployment.extensions/details-v1 created
service/ratings created
deployment.extensions/ratings-v1 created
service/reviews created
deployment.extensions/reviews-v1 created
deployment.extensions/reviews-v2 created
deployment.extensions/reviews-v3 created
service/productpage created
deployment.extensions/productpage-v1 created

~ [istio-cni-demo-1290] ❯ kubectl -n default apply -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

Check that the running pods don’t have any init containers πŸ”—︎

~ [istio-cni-demo-1290] ❯ kubectl get pods '-o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name,INITCONTAINERS:.spec.initContainers[*].name'
NAME                              CONTAINERS                INITCONTAINERS
details-v1-bc557b7fc-2xc4t        details,istio-proxy       <none>
productpage-v1-6597cb5df9-g66vl   productpage,istio-proxy   <none>
ratings-v1-5c46fc6f85-mmhhr       ratings,istio-proxy       <none>
reviews-v1-69dcdb544-gmw2c        reviews,istio-proxy       <none>
reviews-v2-65fbdc9f88-cjwdx       reviews,istio-proxy       <none>
reviews-v3-bd8855bdd-b2fqw        reviews,istio-proxy       <none>

Determine the external hostname of the ingress gateway and open productpage in a browser πŸ”—︎

~ [istio-cni-demo-1290] ❯ INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
~ [istio-cni-demo-1290] ❯ open http://$INGRESS_HOST/productpage
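
If open isn’t available in your shell, a quick curl of the product page works just as well to check that traffic reaches the application through the mesh (the Bookinfo page title should show up):

~ [istio-cni-demo-1290] ❯ curl -s "http://${INGRESS_HOST}/productpage" | grep -o "<title>.*</title>"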

Takeaway πŸ”—︎

The Istio CNI plugin replaces the istio-init container, providing the same functionality without requiring Istio users to grant elevated privileges. It performs traffic redirection during the network setup phase of the Kubernetes pod lifecycle, thereby removing the NET_ADMIN capability requirement for users deploying pods into the Istio mesh.

The Istio operator - contributing and development πŸ”—︎

Our Istio operator is still under heavy development. If you’re interested in this project, we’re happy to accept contributions, and to address any issues or feature requests you might have. You can support its development by starring the repo, or, if you would like to help with development, by reading our contribution and development guidelines in the first blog post on our Istio operator.

About Banzai Cloud Pipeline πŸ”—︎

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures β€” multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on β€” are default features of the Pipeline platform.