
Kubeless was designed to be a Kubernetes-native serverless framework and, for PubSub functions, uses Apache Kafka behind the scenes. At Banzai Cloud we like cloud-native technologies; however, we weren’t happy about having to operate a Zookeeper cluster on Kubernetes, so we modified and open-sourced a version of Kafka in which we replaced Zookeeper with etcd, which was (and still is) a better fit. This post is part of our serverless series and discusses deploying Kubeless, using Kafka on etcd with Pipeline, and deploying a so-called PubSub function.

Create a Kubernetes Cluster 🔗︎

If you need help creating a Kubernetes cluster, check out Pipeline, a RESTful API built for creating and managing Kubernetes clusters on different cloud providers. Currently it supports Google’s GKE, Azure’s AKS and Amazon’s AWS, with more providers coming soon. To deploy Pipeline, please follow these instructions. Once Pipeline is up and running, Kubernetes clusters and deployments can be created using this Postman collection.
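
For reference, a cluster-creation request against Pipeline might look something like the sketch below. This is only a minimal sketch: it assumes Pipeline listens on http://localhost:9090 and exposes a POST /api/v1/clusters endpoint, and the payload fields are illustrative; take the real endpoint, authentication headers and payload from the Postman collection.

# Illustrative cluster-creation call; host, port, path and payload fields are assumptions
curl -X POST http://localhost:9090/api/v1/clusters \
  -H "Content-Type: application/json" \
  -d '{ "name": "kubeless-demo", "location": "eu-west-1", "cloud": "amazon" }'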

Deploy Kubeless to a Kubernetes cluster 🔗︎

Kubeless can be deployed into a Kubernetes cluster with a simple RESTful API call, using Pipeline. Look for the Deployment Create API call in the same Postman collection. To invoke the Deployment Create API, we need to provide two parameters (see the sketch after this list):

  • cluster id - the identifier of the desired Kubernetes cluster from the list of clusters Pipeline manages. (To see the list of Kubernetes clusters managed by Pipeline, invoke the Cluster List REST API call.)

  • REST call body:

        { "name": "banzaicloud-stable/kubeless" }
    

Deploy a PubSub Function to Kubeless using the kubeless command line tool 🔗︎

Functions can be deployed with a fully featured command line tool, simply called kubeless. Download the kubeless CLI from the release page. OSX users can also use brew: brew install kubeless. Let’s take a look at an example of how this tool can be used.
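
As a rough sketch, downloading the CLI from the release page might look like this; the version, archive name and layout are assumptions based on the release naming pattern, so check the release page for the actual asset.

# Illustrative download of the kubeless CLI; substitute the latest release version
export KUBELESS_VERSION=v1.0.8   # assumption: pick the version from the release page
export OS=$(uname -s | tr '[:upper:]' '[:lower:]')
curl -OL https://github.com/kubeless/kubeless/releases/download/${KUBELESS_VERSION}/kubeless_${OS}-amd64.zip
unzip kubeless_${OS}-amd64.zip
sudo mv bundles/kubeless_${OS}-amd64/kubeless /usr/local/bin/

# or, on OSX:
brew install kubeless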

Kubeless functions come in three different types (see the sketch of the corresponding deploy flags after this list):

  • http triggered
  • pubsub triggered
  • schedule triggered
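
With the CLI version used in this post, the trigger type is chosen with a flag on kubeless function deploy. The sketch below shows one flag per type; the http and schedule flags, as well as the function names hello and cron-python and the file hello.py, are assumptions for illustration, and only the pubsub variant is used in the rest of this post.

# http triggered (flag assumed): expose the function over HTTP
kubeless function deploy hello --trigger-http \
  --runtime python2.7 --handler hello.handler --from-file hello.py

# pubsub triggered: subscribe the function to a Kafka topic
kubeless function deploy pubsub-python --trigger-topic s3-python \
  --runtime python2.7 --handler pubsub.handler --from-file pubsub.py

# schedule triggered (flag assumed): run the function on a cron schedule
kubeless function deploy cron-python --schedule "*/10 * * * *" \
  --runtime python2.7 --handler pubsub.handler --from-file pubsub.py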

We are going to use a Python example provided by Kubeless. This function simply returns everything that is pushed into it.

def handler(context):
    # echo back whatever is pushed into the function
    return context

This will be a pubsub triggered function. To deploy it using the Kubeless command line tool, save this code snippet to a file named pubsub.py, and run the following:

kubeless function deploy pubsub-python --trigger-topic s3-python \
  --runtime python2.7 --handler pubsub.handler --from-file pubsub.py \
  --env KUBELESS_KAFKA_SVC=bootstrap --env KUBELESS_KAFKA_NAMESPACE=default
INFO[0000] Deploying function...
INFO[0000] Function pubsub-python submitted for deployment
INFO[0000] Check the deployment status executing 'kubeless function ls pubsub-python'

kubectl get pods
NAME                                                   READY     STATUS    RESTARTS   AGE
etcd-cluster-0000                                      1/1       Running   0          5m
etcd-cluster-0001                                      1/1       Running   0          5m
etcd-cluster-0002                                      1/1       Running   0          4m
gauche-lionfish-etcd-operator-8579c69b9c-ss224         1/1       Running   0          5m
gauche-lionfish-kubeless-controller-5d9c56b576-2w6kf   1/1       Running   0          5m
kafka-0                                                1/1       Running   0          5m
kafka-1                                                1/1       Running   0          4m
kafka-2                                                1/1       Running   0          4m
kafka-topic-creator                                    1/1       Running   0          5m
pubsub-python-74795df756-ldns4                         1/1       Running   0          3s
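
Now that the function pod is Running, the deployment status can also be checked with the kubeless CLI, as the INFO line above suggests; the output is omitted here because its exact format varies between kubeless versions.

kubeless function ls pubsub-python
# expect pubsub-python listed with the pubsub.handler handler and the python2.7 runtime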

As you can see, the pubsub-python function was successfully submitted. Behind the scenes, stock Kubeless uses Kafka with Zookeeper; pubsub-type functions each subscribe to a specific topic (in our case, s3-python), and the function is triggered whenever a new message is published to that topic. We replaced Zookeeper with etcd inside Kafka and modified the Kubeless Helm chart to use Kafka on etcd. To invoke the function, we need to create a Kafka topic and publish a message to it.

Note that the kubeless command line tool (kubeless topic create test-topic) cannot be used here, because it would attempt to talk to Zookeeper instead of etcd. Instead, exec into the kafka-topic-creator pod and run the Kafka command line tools directly:

kubectl exec -it kafka-topic-creator bash
# Inside the container run:

./bin/kafka-topics.sh --zookeeper etcd://etcd-cluster-client:2379 --create --topic s3-python --partitions 1 --replication-factor 3
Created topic "s3-python".

./bin/kafka-topics.sh --zookeeper etcd://etcd-cluster-client:2379 --list
__consumer_offsets
s3-python

./bin/kafka-console-producer.sh --broker-list bootstrap:9092 --topic s3-python
>Welcome
>Kubeless
>on
>BanzaiCloud

Now let’s check and see if the registered function worked, by running:

kubectl logs -f pubsub-python-74795df756-ldns4
Welcome
Kubeless
on
BanzaiCloud

Once Kubeless is deployed and using Kafka on etcd, we can use our advanced Apache Kafka monitoring to gain better insight into how our PubSub functions are performing.