By default, the Cluster logging service collects the log messages of the applications running in your cluster.

  • To collect metrics about the logging service, make sure that the Cluster monitoring service is enabled, and select Settings > Collect metrics from logging service.
  • To transfer your log messages over encrypted channels, select Settings > Secure logging channels using TLS. That way, logs are transferred between the cluster nodes and Fluentd (which handles log routing and forwarding) over encrypted and authenticated connections.
  • To collect host logs (for example, kubelet and audit logs) and Kubernetes events, you need to use the logging extensions of the Banzai Cloud One Eye observability system. For details on buying One Eye, contact us at sales@banzaicloud.com.

Click ACTIVATE to install the Logging operator, Fluentd, and Fluent Bit on your cluster. You can customize your logging flows using the Logging operator, for example, forward your cluster logs to an external service.
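For example, a custom logging flow can be defined with the Logging operator's `Flow` and `Output` custom resources. The sketch below forwards the logs of pods labeled `app: my-app` to an external HTTP endpoint; the resource names, namespace, and endpoint URL are placeholders you need to adapt to your environment:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: http-output        # placeholder name
  namespace: default
spec:
  http:
    endpoint: https://logs.example.com/ingest   # placeholder endpoint
    buffer:
      timekey: 1m
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-flow        # placeholder name
  namespace: default
spec:
  match:
    - select:
        labels:
          app: my-app      # forward only logs of these pods
  localOutputRefs:
    - http-output
```

A `Flow` selects which log records to route within its namespace, and the referenced `Output` defines where Fluentd sends them.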

Fetch logs with Loki 🔗︎

  1. Make sure that you have enabled Cluster monitoring > Grafana.
  2. Enable Loki.
  3. If you want to access Loki from outside the cluster:
    1. Select Enable Ingress.
    2. Select a Secret to use for authentication. If you don't specify a secret, Pipeline automatically generates one. After activating the Cluster logging integrated service, you can find the password in the left sidebar of this page.
  4. By default, you can access Loki under the IP address of the cluster (or a domain name, depending on your cloud provider, and whether you have enabled the DNS integrated service). To access Loki under a specific domain name, enter it into the Specify the ingress host field, and make sure that the domain name is associated with the IP address of the cluster, or that the ingress matches every host.
  5. Click ACTIVATE or SAVE ALL CHANGES to apply your changes. After the service is activated, login information is displayed on the left side of this page.

You can access Loki from the command line with LogCLI.
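For instance, assuming LogCLI is installed locally and Loki is exposed through the ingress with basic authentication, a session could look like the following (the address and credentials are placeholders; use the login information displayed on this page):

```shell
# Point LogCLI at the exposed Loki endpoint (placeholder values)
export LOKI_ADDR=https://loki.example.com
export LOKI_USERNAME=admin
export LOKI_PASSWORD='<password-from-this-page>'

# List the available labels, then fetch recent logs from a namespace
logcli labels
logcli query '{namespace="default"}' --limit=50
```

The `{namespace="default"}` selector uses Loki's LogQL syntax; any label attached to your log streams can be used to filter.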

Configure log storage 🔗︎

You can store all the logs of your cluster centrally in a storage object. This will be the default ClusterOutput of the Logging operator.

  1. Select the provider that will store your logs under Default ClusterOutput:

    • Alibaba Object Storage
    • Amazon S3
    • Azure Blob Storage
    • Google Cloud Storage
  2. Select the secret to use in the Secret field, or create a new one. The secret must have the privileges needed to write to the storage object, or to create a new one.

  3. Select the storage object to use in the Storage field, or create a new one. When creating a new storage object, provide the Name and Region of the new storage object. For Azure Blob Storage, also provide the name of the Storage account to use.
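Under the hood, this configuration corresponds to a `ClusterOutput` custom resource of the Logging operator. As a rough sketch for Amazon S3 (the bucket, region, secret name, and keys below are placeholder assumptions), it could look like:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: default-clusteroutput   # placeholder name
  namespace: logging
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret           # placeholder secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: my-cluster-logs      # placeholder bucket
    s3_region: us-east-1            # placeholder region
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m                  # flush a chunk every 10 minutes
      timekey_wait: 1m
```

The buffer settings control how often Fluentd flushes log chunks to the bucket; the `path` template partitions objects by tag and date.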