The tailer-webhook is a different approach to the same problem: parsing a legacy application's log files. As an alternative to using a host file tailer service, you can use a file tailer webhook service.
While the containers of the host file tailers run in a separate pod, the file tailer webhook uses a different approach: if a pod has a specific annotation, the webhook injects a sidecar container into the pod for every tailed file.
The tailer-webhook behaves differently compared to the host-tailer:
Pros:
- A simple annotation on the pod initiates the file tailing.
- There is no need to use mounted volumes; the Logging operator manages the volumes and mounts between your containers.
Cons:
- You must start the Logging operator with the webhooks service enabled. This requires additional configuration, especially for certificates, since webhook services are served over TLS only.
- It might use more resources, since every tailed file attaches a new sidecar container to the pod.
Enable webhooks in Logging operator 🔗︎
We recommend using cert-manager to manage your certificates. Since using cert-manager is beyond the scope of this article, we assume you already have valid certificates.
You will need the following:
- a valid client certificate,
- a CA certificate, and
- a custom values.yaml file for your Helm chart.
The following example refers to a Kubernetes secret named `webhook-tls`, which contains a self-signed certificate generated by cert-manager.
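Such a secret can be produced with cert-manager manifests similar to the following sketch. The Issuer name is illustrative; the `secretName` must match the secret referenced above (`webhook-tls`), and the DNS name must match the webhook service created later in this article (`logging-webhooks` in the `logging` namespace):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer    # illustrative name
  namespace: logging
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webhook-tls
  namespace: logging
spec:
  secretName: webhook-tls            # the secret the operator mounts
  dnsNames:
    - logging-webhooks.logging.svc   # must match the webhook service
  issuerRef:
    name: selfsigned-issuer
```

With a SelfSigned issuer, cert-manager also writes a `ca.crt` key into the secret, which this article later uses to fill the `caBundle` field.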
Add the following lines to your custom values.yaml file, or create a new file if needed:
```yaml
env:
  - name: ENABLE_WEBHOOKS
    value: "true"
volumes:
  - name: webhook-tls
    secret:
      secretName: webhook-tls
volumeMounts:
  - name: webhook-tls
    mountPath: /tmp/k8s-webhook-server/serving-certs
```
This will:
- Set the `ENABLE_WEBHOOKS` environment variable to `true`. This is the official way to enable webhooks in the Logging operator.
- Create a volume from the `webhook-tls` Kubernetes secret.
- Mount the `webhook-tls` secret volume to the `/tmp/k8s-webhook-server/serving-certs` path, where the Logging operator will look for it.
Now you are ready to install Logging operator with the new custom values:
```shell
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  -f operator_values.yaml \
  logging-operator ./charts/logging-operator
```
Alternatively, instead of using the values.yaml file, you can pass the values on the command line with the `--set` and `--set-string` parameters:
```shell
helm upgrade --install --wait --create-namespace --namespace logging \
  --set "env[0].name=ENABLE_WEBHOOKS" --set-string "env[0].value=true" \
  --set "volumes[0].name=webhook-tls" \
  --set "volumes[0].secret.secretName=webhook-tls" \
  --set "volumeMounts[0].name=webhook-tls" \
  --set "volumeMounts[0].mountPath=/tmp/k8s-webhook-server/serving-certs" \
  logging-operator ./charts/logging-operator
```
You also need a service that points to the webhook port (9443) of the Logging operator, and that the MutatingWebhookConfiguration will point to. Running the following command in a shell creates the required service:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: logging-webhooks
  namespace: logging
spec:
  ports:
    - name: logging-webhooks
      port: 443
      targetPort: 9443
      protocol: TCP
  selector:
    app.kubernetes.io/instance: logging-operator
  type: ClusterIP
EOF
```
Furthermore, you need to tell Kubernetes to send admission requests to our webhook service. To do that, create a MutatingWebhookConfiguration Kubernetes resource, and:
- Set the configuration to call the `/tailer-webhook` path on your `logging-webhooks` service when a `v1.Pod` resource is created.
- Set `failurePolicy` to `Ignore`, which means that the original pod is created even if the webhook returns an error.
- Set `sideEffects` to `None`, because the webhook doesn't cause any out-of-band changes in Kubernetes.
Unfortunately, the MutatingWebhookConfiguration requires the `caBundle` field to be filled, because we used a self-signed certificate that cannot be validated through the system trust roots. If your certificate was issued by a CA present in the system trust roots, remove the `caBundle` line, because the certificate will be validated automatically.
There are more sophisticated ways to load the CA into this field, but the following solution requires no further components. For example, you can have cert-manager inject the CA by adding the `cert-manager.io/inject-ca-from: logging/webhook-tls` annotation to the MutatingWebhookConfiguration resource.
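As a sketch, that annotation-based variant would look like the following; it assumes that `webhook-tls` is a cert-manager Certificate resource in the `logging` namespace, and it lets the cert-manager CA injector fill the `caBundle` field so you can omit it from the manifest:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sample-webhook-cfg
  annotations:
    # cert-manager fills caBundle from the logging/webhook-tls Certificate
    cert-manager.io/inject-ca-from: logging/webhook-tls
webhooks:
  ...
```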
```shell
kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sample-webhook-cfg
  namespace: logging
  labels:
    app: sample-webhook
webhooks:
  - name: sample-webhook.banzaicloud.com
    clientConfig:
      service:
        name: logging-webhooks
        namespace: logging
        path: "/tailer-webhook"
      caBundle: $(kubectl get secret webhook-tls -n logging -o json | jq -r '.data["ca.crt"]')
    rules:
      - operations: [ "CREATE" ]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        scope: "*"
    failurePolicy: Ignore
    sideEffects: None
    admissionReviewVersions: [v1]
EOF
```
Triggering the webhook 🔗︎
CAUTION: To use the webhook, you must first enable webhooks in the Logging operator.

The file tailer webhook is based on a mutating admission webhook that is called every time a pod starts.
To trigger the webhook, add the following annotation to the pod metadata:

- Annotation key: `sidecar.logging-extensions.banzaicloud.io/tail`
- Value of the annotation: the filename (including its path, and optionally the container name) you want to tail, for example:

  ```yaml
  annotations: {"sidecar.logging-extensions.banzaicloud.io/tail": "/var/log/date"}
  ```

- To tail multiple files, add only one annotation and separate the filenames with commas, for example:

  ```yaml
  ...
  metadata:
    name: test-pod
    annotations: {"sidecar.logging-extensions.banzaicloud.io/tail": "/var/log/date,/var/log/mycustomfile"}
  spec:
  ...
  ```

- If the pod contains multiple containers, see Multi-container pods.
Note: If the pod with the sidecar annotation is in the default namespace, the Logging operator handles tailer-webhook annotations cluster-wide. To restrict the webhook callbacks to the current namespace, change the `scope` of the MutatingWebhookConfiguration to `Namespaced`.
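A minimal sketch of that change, applied to the rules section of the MutatingWebhookConfiguration shown earlier:

```yaml
rules:
  - operations: [ "CREATE" ]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    scope: "Namespaced"   # instead of "*"
```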
File tailer example 🔗︎
The following example creates a pod that is running a shell in infinite loop that appends the date
command’s output to a file every second. The annotation sidecar.logging-extensions.banzaicloud.io/tail
notifies Logging operator to attach a sidecar container to the pod. The sidecar tails the /legacy-logs/date.log
file and sends its output to the stdout.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  annotations: {"sidecar.logging-extensions.banzaicloud.io/tail": "/legacy-logs/date.log"}
spec:
  containers:
    - image: debian
      name: test
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
            date >> /legacy-logs/date.log;
            sleep 1;
          done
```
After you have created the pod with the required annotation, verify that the test-pod contains two containers by running `kubectl get pod`.

Expected output:
```
NAME       READY   STATUS    RESTARTS   AGE
test-pod   2/2     Running   0          29m
```
Check the container names in the pod to verify that the Logging operator has created the sidecar container called `legacy-logs-date-log`. The sidecar container's name is always derived from the path and name of the tailed file. Run the following command:
```shell
kubectl get pod test-pod -o json | jq '.spec.containers | map(.name)'
```
Expected output:
```json
[
  "test",
  "legacy-logs-date-log"
]
```
Check the logs of the `test` container. Since it writes its logs into a file, it does not produce any output on stdout:

```shell
kubectl logs test-pod test; echo $?
```
Expected output:

```
0
```
Check the logs of the `legacy-logs-date-log` container. This container exposes the logs of the `test` container on its stdout:

```shell
kubectl logs test-pod legacy-logs-date-log
```
Expected output:

```
Fluent Bit v1.9.5
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2022/09/15 11:26:11] [ info] [fluent bit] version=1.9.5, commit=9ec43447b6, pid=1
[2022/09/15 11:26:11] [ info] [storage] version=1.2.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
[2022/09/15 11:26:11] [ info] [cmetrics] version=0.3.4
[2022/09/15 11:26:11] [ info] [sp] stream processor started
[2022/09/15 11:26:11] [ info] [input:tail:tail.0] inotify_fs_add(): inode=938627 watch_fd=1 name=/legacy-logs/date.log
[2022/09/15 11:26:11] [ info] [output:file:file.0] worker #0 started
Thu Sep 15 11:26:11 UTC 2022
Thu Sep 15 11:26:12 UTC 2022
...
```
Multi-container pods 🔗︎
In some cases your pod contains multiple containers, and you want to specify which file annotation belongs to which container. To assign a file annotation to a particular container, prefix the annotation with a `${ContainerName}:` container key. For example:
```yaml
...
metadata:
  name: test-pod
  annotations: {"sidecar.logging-extensions.banzaicloud.io/tail": "sample-container:/var/log/date,sample-container2:/var/log/anotherfile,/var/log/mycustomfile,foobarbaz:/foo/bar/baz"}
spec:
...
```
CAUTION:
- Annotations without a container name prefix: the file is tailed in the default container (container 0).
- Annotations with an invalid container name: the file tailer annotation is discarded.
| Annotation | Explanation |
|---|---|
| `sample-container:/var/log/date` | tails the `/var/log/date` file in `sample-container` |
| `sample-container2:/var/log/anotherfile` | tails the `/var/log/anotherfile` file in `sample-container2` |
| `/var/log/mycustomfile` | tails the `/var/log/mycustomfile` file in the default container (`sample-container`) |
| `foobarbaz:/foo/bar/baz` | discarded because of the non-existing container name |