The following procedure shows you how to collect metrics from a peer cluster into an existing One Eye deployment. The cluster running the One Eye deployment is called the observer cluster; the new cluster you are collecting metrics from is called the peer cluster. For details on how One Eye collects metrics from multiple clusters, see Multicluster metrics.
Prerequisites
- You have completed the prerequisites described in Prerequisites.
- You have another cluster that will be the peer cluster.
CAUTION:
If Prometheus is already installed on the peer cluster, make sure that it sets the cluster label properly on the collected metrics: the label must be present and unique, otherwise the metrics of the different clusters get mixed up.
If One Eye has already been installed on the peer cluster, make sure that the spec.clusterName field of the observer custom resource is different on the observer and the peer clusters. One Eye version 0.5.0 and later tries to detect it automatically from the context.
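For reference, the cluster label is typically applied as an external label on Prometheus. A minimal sketch, assuming the Prometheus operator is used (the resource name and label value below are illustrative):
# Illustrative snippet of a Prometheus custom resource (Prometheus operator):
# every metric scraped by this instance is tagged with a unique cluster label.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: peer-prometheus        # hypothetical name
spec:
  externalLabels:
    cluster: my-peer-cluster   # must be unique across your clusters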
Limitations
- The collected metrics are available on the observer cluster (for example, from Grafana), but they are not displayed on the One Eye UI.
- If you are using object storage to store the collected metrics, you must configure the compactor manually.
- Currently only metrics are collected from the peer clusters; logs are not.
Steps
Note: The following procedure requires you to switch between the Kubernetes context of the observer and the peer cluster. A convenient way to switch between contexts is to use the kubectx tool.
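The steps below refer to the contexts through the $OBSERVER_CONTEXT and $PEER_CONTEXT variables. A minimal sketch of setting them up, assuming both contexts already exist in your kubeconfig (the context names are illustrative):
# List the available contexts, then record the two used in this procedure
kubectx
export OBSERVER_CONTEXT=admin@observer   # hypothetical context name
export PEER_CONTEXT=admin@peer           # hypothetical context name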
- Deploy One Eye on the observer cluster. Run the one-eye install command, then follow the on-screen instructions.
Note: The interactive installer helps you configure a simple logging system; this is detailed in Deploy One Eye. To collect only metrics, you don’t have to configure logging on the cluster. In this case, just set the name of the cluster, then answer No to the subsequent questions.
- Obtain the name of the peer cluster, for example, by getting it from the current context of the peer’s kubeconfig. On the peer cluster, run the following commands:
kubectx $PEER_CONTEXT
export PEER_ENDPOINT=$(kubectl config current-context | cut -d '@' -f 2)
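The cut expression assumes that the context name has the user@cluster form; you can print the variable to confirm that it holds the bare cluster name:
# Sanity check: should print only the cluster name, without the user part
echo $PEER_ENDPOINT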
- Switch to the observer cluster.
kubectx $OBSERVER_CONTEXT
- If not already installed, install the cert-manager and thanos-operator components on the observer cluster.
one-eye cert-manager install --update
one-eye thanos install --operator-only --update
- Create a secret that will be used to establish trust between the observer and the peer clusters. The following steps show you how to use the same certificate on both clusters.
- Create a self-signed issuer that will act as the CA.
cat <<EOF | kubectl apply -f-
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
EOF
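You can check that cert-manager accepted the issuer; the READY column should show True:
kubectl get issuer selfsigned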
- Create a Kubernetes secret for the certificate with the proper labels (monitoring.banzaicloud.io/thanospeer and monitoring.banzaicloud.io/thanospeer-ca). The labels are needed so the Thanos operator can find the secret automatically and attach it to the ThanosPeer custom resource.
cat <<EOF | kubectl apply -f-
apiVersion: v1
kind: Secret
metadata:
  name: ${PEER_ENDPOINT}-tls
  labels:
    monitoring.banzaicloud.io/thanospeer: ${PEER_ENDPOINT}
    monitoring.banzaicloud.io/thanospeer-ca: ${PEER_ENDPOINT}
type: kubernetes.io/tls
data:
  tls.crt: ""
  tls.key: ""
  ca.crt: ""
EOF
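The empty data fields are placeholders: cert-manager fills them in when it issues the certificate in the next step. You can later find the secret the same way the operator does, by its label (a sketch using the label value set above):
kubectl get secret -l monitoring.banzaicloud.io/thanospeer=${PEER_ENDPOINT}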
- Create the certificate that will be used on both clusters.
cat <<EOF | kubectl apply -f-
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${PEER_ENDPOINT}-tls
spec:
  secretName: ${PEER_ENDPOINT}-tls
  commonName: peer-endpoint.cluster.notld
  dnsNames:
    - $PEER_ENDPOINT
  issuerRef:
    name: selfsigned
  usages:
    - server auth
    - client auth
EOF
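Issuing the certificate can take a few seconds. Before saving the secret in the next step, you can wait for the certificate to become ready (a sketch; the timeout value is arbitrary):
kubectl wait --for=condition=Ready certificate/${PEER_ENDPOINT}-tls --timeout=120s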
- Save the secret so you can load it into the peer cluster later (for example, into a file called ${PEER_ENDPOINT}-tls.yaml).
kubectl get secret ${PEER_ENDPOINT}-tls -o yaml > ${PEER_ENDPOINT}-tls.yaml
- Switch to the peer cluster.
kubectx $PEER_CONTEXT
- If not already installed, install Prometheus and the Thanos operator on the peer cluster.
CAUTION:
If Prometheus is already installed on the peer cluster, make sure that it sets the cluster label properly on the collected metrics: the label must be present and unique, otherwise the metrics of the different clusters get mixed up.
If One Eye has already been installed on the peer cluster, make sure that the spec.clusterName field of the observer custom resource is different on the observer and the peer clusters. One Eye version 0.5.0 and later tries to detect it automatically from the context.
one-eye prometheus install --update
one-eye thanos install --operator-only --update
- Install the ingress controller.
one-eye ingress install --update
- Apply the saved secret.
kubectl apply -f ${PEER_ENDPOINT}-tls.yaml
- Connect the peer cluster to the observer cluster.
- On the peer cluster, create the ThanosEndpoint custom resource.
one-eye thanos endpoint generate $PEER_ENDPOINT --cert-secret-name ${PEER_ENDPOINT}-tls --ca-bundle-secret-name ${PEER_ENDPOINT}-tls | kubectl apply -f-
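You can verify that the custom resource was created (a sketch, assuming the CRD’s default resource naming):
kubectl get thanosendpoints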
- Generate the ThanosPeer custom resource for the peer cluster and save it (for example, into the thanos-peer.yaml file).
one-eye thanos peer generate --name $PEER_ENDPOINT --wait-poll-interval 5 > thanos-peer.yaml
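Before switching contexts, you can review the generated manifest, for example with a client-side dry run that only validates the file locally:
kubectl apply --dry-run=client -f thanos-peer.yaml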
- Switch to the observer cluster.
kubectx $OBSERVER_CONTEXT
- Apply the generated ThanosPeer custom resource to the observer cluster.
kubectl apply -f thanos-peer.yaml
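After the operator reconciles the resource, a query service for the peer (the ${PEER_ENDPOINT}-peer-query service used in the verification step below) should appear on the observer cluster. A sketch of checking this, assuming the CRD’s default resource naming:
kubectl get thanospeers
kubectl get svc | grep ${PEER_ENDPOINT}-peer-query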
- Verify that the metrics of the peer cluster are stored on the observer cluster. Run the following commands:
kubectl port-forward svc/${PEER_ENDPOINT}-peer-query 10902:10902
open localhost:10902/stores
Select Stores on the Thanos web interface.
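Note that open is a macOS command; on other systems, open the address in a browser manually. Alternatively, you can list the registered stores from the command line through the Thanos query HTTP API; the peer endpoint should show up among the stores:
# Lists the stores known to the peer query instance (Thanos HTTP API)
curl -s http://localhost:10902/api/v1/stores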