f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo

Published at 2025-12-14T20:00:00+02:00

This is a follow-up to Part 8 of the f3s series, where I covered Prometheus, Grafana, Loki, and Alloy. Now it's time for the last pillar of observability: distributed tracing with Grafana Tempo.

Part 8: Observability (Prometheus, Grafana, Loki, Alloy)

For a preview of what distributed tracing with Tempo looks like in Grafana, check out the X-RAG blog post:

X-RAG Observability Hackathon

Table of Contents

Why Distributed Tracing?
Deploying Grafana Tempo
Configuring Alloy for Trace Collection
Demo Tracing Application
Visualizing Traces in Grafana
Practical Example: End-to-End Trace
Correlation Between Signals
Storage and Retention
Configuration Files

Why Distributed Tracing?

In a microservices setup, a single user request can hop through multiple services. Tracing gives you the end-to-end path of each request, the time spent in every service along the way, and a quick way to pinpoint where errors originate.

Without it, you're basically guessing where time gets spent.

Deploying Grafana Tempo

Tempo runs in monolithic mode — all components in one process, same pattern as Loki's SingleBinary deployment. Keeps things simple for a home lab.

The setup has three parts:

Tempo Helm Values
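
A minimal sketch of values for the grafana/tempo chart, matching the 10Gi volume and 7-day retention discussed under Storage and Retention below (the Prometheus remote-write URL is an assumption):

```yaml
# values-tempo.yaml (sketch; service names are assumptions)
tempo:
  retention: 168h  # 7 days
  receivers:
    otlp:
      protocols:
        grpc:
        http:
  metricsGenerator:
    enabled: true  # feeds the service graph shown later
    remoteWriteUrl: http://prometheus-operated:9090/api/v1/write  # assumed service name
persistence:
  enabled: true
  size: 10Gi
```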

Persistent Volumes
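
The backing storage comes from the setup in Part 6 of this series. As a placeholder, a static PV/PVC pair could look roughly like this (backend type, server address, and export path are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tempo-data
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  nfs:
    server: 10.0.0.10      # hypothetical storage server
    path: /data/k8s/tempo  # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-tempo-0  # the name the chart's StatefulSet claim template expects
  namespace: monitoring  # assumed namespace
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  volumeName: tempo-data
```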

Grafana Datasource Provisioning

All Grafana datasources (Prometheus, Alertmanager, Loki, Tempo) are provisioned via a single ConfigMap mounted directly to the Grafana pod. No sidecar discovery needed.

In `grafana-datasources-all.yaml`:
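
A sketch of the Tempo entry; datasource UIDs and service URLs are assumptions, and the real file also carries the Prometheus, Alertmanager, and Loki entries:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources-all
  namespace: monitoring  # assumed namespace
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Tempo
        type: tempo
        uid: tempo
        url: http://tempo:3200
        jsonData:
          tracesToLogsV2:
            datasourceUid: loki        # assumed Loki datasource UID
            filterByTraceID: true
          tracesToMetrics:
            datasourceUid: prometheus  # assumed Prometheus datasource UID
          serviceMap:
            datasourceUid: prometheus
          lokiSearch:
            datasourceUid: loki
```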

The Tempo datasource config links traces to Loki logs and Prometheus metrics — so you can jump between signals directly in Grafana.

The kube-prometheus-stack Helm values disable sidecar-based discovery and mount this ConfigMap directly to `/etc/grafana/provisioning/datasources/`.
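
In the kube-prometheus-stack values, that might look like this (ConfigMap name as in the sketch above):

```yaml
grafana:
  sidecar:
    datasources:
      enabled: false  # no sidecar-based discovery
  extraConfigmapMounts:
    - name: grafana-datasources-all
      configMap: grafana-datasources-all
      mountPath: /etc/grafana/provisioning/datasources
      readOnly: true
```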

Installation
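
Assuming the values file sketched above and a `monitoring` namespace, installation is the usual Helm dance:

```sh
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install tempo grafana/tempo \
  --namespace monitoring \
  --values values-tempo.yaml
```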

Verify it's running:
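
Something along these lines (labels and names depend on the release name):

```sh
kubectl --namespace monitoring get pods --selector app.kubernetes.io/name=tempo
kubectl --namespace monitoring get pvc
```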

Configuring Alloy for Trace Collection

I updated the Alloy values to add OTLP receivers for traces alongside the existing log collection.

Added to the Alloy config:
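
A sketch of the trace pipeline in Alloy's configuration syntax; the Tempo service address is an assumption:

```
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  http {
    endpoint = "0.0.0.0:4318"
  }
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo.monitoring.svc.cluster.local:4317" // assumed Tempo service
    tls {
      insecure = true
    }
  }
}
```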

Upgrade Alloy:
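
With the updated values (file name assumed):

```sh
helm upgrade --namespace monitoring alloy grafana/alloy \
  --values values-alloy.yaml
```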

Demo Tracing Application

To actually see traces, I built a three-tier Python app. Nothing fancy — just enough to generate real distributed traces.

Architecture
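
The three tiers chain plain HTTP calls, which is exactly what shows up in the traces later in this post:

```
client --HTTP--> frontend --HTTP--> middleware --HTTP--> backend
```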

OpenTelemetry Instrumentation

All three services are instrumented with the Python OpenTelemetry libraries.

Dependencies:
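
A plausible `requirements.txt`; the web framework is an assumption, and any framework with an OTel instrumentation package works the same way:

```
flask
requests
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp
opentelemetry-instrumentation-flask
opentelemetry-instrumentation-requests
```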

Auto-instrumentation pattern (same across all services, just change the service name):
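
A sketch of the pattern, assuming Flask and the gRPC OTLP exporter; endpoint and service name come from the environment, and the defaults here are assumptions:

```python
import os

from flask import Flask
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Identify this service; the other two tiers only change OTEL_SERVICE_NAME.
resource = Resource.create({"service.name": os.getenv("OTEL_SERVICE_NAME", "frontend")})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            # Alloy's OTLP gRPC receiver; the address is an assumption.
            endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "alloy.monitoring:4317"),
            insecure=True,
        )
    )
)
trace.set_tracer_provider(provider)

app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)  # server spans for incoming requests
RequestsInstrumentor().instrument()      # client spans + W3C context propagation
```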

The auto-instrumentation creates spans for HTTP requests, propagates trace context via W3C headers, and links parent/child spans across services automatically.

Deployment

The demo app has a Helm chart in the conf repo. Build, import the container images, and install:
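
On k3s the images can be imported straight into containerd; image names and paths here are assumptions:

```sh
# build the three images
docker build -t tracing-demo-frontend:latest   ./frontend
docker build -t tracing-demo-middleware:latest ./middleware
docker build -t tracing-demo-backend:latest    ./backend

# import them into k3s' containerd on each node
docker save tracing-demo-frontend:latest   | sudo k3s ctr images import -
docker save tracing-demo-middleware:latest | sudo k3s ctr images import -
docker save tracing-demo-backend:latest    | sudo k3s ctr images import -

# install the Helm chart from the conf repo (chart path assumed)
helm upgrade --install tracing-demo ./tracing-demo \
  --namespace tracing-demo --create-namespace
```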

Verify:
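
The pods should come up in the demo namespace (name assumed as above):

```sh
kubectl --namespace tracing-demo get pods
```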

Access at:

http://tracing-demo.f3s.foo.zone

Visualizing Traces in Grafana

Searching for Traces

In Grafana, go to Explore, select the Tempo datasource, and you can search by trace ID, service name, or tags.

Some useful TraceQL queries, each sketched in the block after this list:

Find all traces from the demo app

Find slow requests (>200ms)

Find traces from a specific service

Find errors

Frontend traces with server errors

Service Graph

The service graph view shows visual connections between services (Frontend to Middleware to Backend) with request rates and latencies. It's generated automatically from trace data: Tempo's metrics-generator derives service-graph metrics and writes them to Prometheus, where Grafana picks them up.

Practical Example: End-to-End Trace

Here's what it looks like to generate and examine a trace.

Generate a trace:
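
Using the ingress host from above (the root path is an assumption):

```sh
curl -i http://tracing-demo.f3s.foo.zone/
```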

The request returns HTTP 200 after passing through all three services.

After a few seconds (batch export delay), search for traces via Tempo API:
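
Tempo's HTTP API can be queried directly, for example via a port-forward to the Tempo service (3200 is Tempo's default HTTP port):

```sh
kubectl --namespace monitoring port-forward svc/tempo 3200:3200 &
curl -s 'http://localhost:3200/api/search?tags=service.name%3Dfrontend&limit=5'
```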

The response is a JSON document listing the matching traces, each with its trace ID, root service name, and duration.

The full trace has 8 spans across the 3 services.

In Grafana, paste the trace ID in the Tempo search box or use TraceQL:
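
Pasting the bare trace ID into the TraceQL box performs a direct ID lookup; alternatively, a query like this finds the demo's root spans (the attribute name is an assumption):

```
{ resource.service.name = "frontend" && span.http.target = "/" }
```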

The waterfall view shows the complete request flow with timing:

Distributed trace in Grafana Tempo: Frontend -> Middleware -> Backend

More Tempo trace screenshots in the X-RAG blog post:

X-RAG Observability Hackathon

Correlation Between Signals

This is where the observability stack really comes together. Tempo integrates with Loki and Prometheus so you can jump between traces, logs, and metrics.

Traces to logs: click on any span and select "Logs for this span." Loki filters by time range, service name, namespace, and pod. Super useful for figuring out what a service was doing during a specific request.

Traces to metrics: from a trace view, the "Metrics" tab shows Prometheus data like request rate, error rate, and duration percentiles for the services involved.

Logs to traces: in Loki, logs containing trace IDs are automatically linked. Click the trace ID and you jump straight to the full trace in Tempo.
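
That link is configured via derived fields on the Loki datasource; a sketch of that entry in the same provisioning file (service URL and log-line regex are assumptions):

```yaml
- name: Loki
  type: loki
  uid: loki
  url: http://loki-gateway  # assumed Loki service name
  jsonData:
    derivedFields:
      - name: TraceID
        datasourceUid: tempo
        matcherRegex: 'trace_id=(\w+)'  # depends on the actual log format
        url: '$${__value.raw}'
```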

Storage and Retention

With 10Gi storage and 7-day retention, the system handles moderate trace volumes. Check usage:
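
For example, by checking the data directory inside the Tempo pod (mount path per the chart default):

```sh
kubectl --namespace monitoring exec tempo-0 -- df -h /var/tempo
```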

If storage fills up, you can reduce retention to 72h, add sampling in Alloy (sketched below), or increase the PV size.
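
A probabilistic sampler in Alloy would sit in front of the Tempo exporter from the earlier config; the percentage here is a made-up starting point:

```
otelcol.processor.probabilistic_sampler "default" {
  sampling_percentage = 25 // keep roughly a quarter of all traces

  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}
```

To wire it in, the batch processor's `traces` output would point at `otelcol.processor.probabilistic_sampler.default.input` instead of the exporter.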

Configuration Files

All config files are on Codeberg:

Tempo configuration

Alloy configuration (updated for traces)

Demo tracing application

Other *BSD-related posts:

2026-04-02 f3s: Kubernetes with FreeBSD - Part 9: GitOps with ArgoCD

2025-12-14 f3s: Kubernetes with FreeBSD - Part 8b: Distributed Tracing with Tempo (You are currently reading this)

2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments

2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage

2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network

2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs

2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts

2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation

2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage

2024-04-01 KISS high-availability with OpenBSD

2024-01-13 One reason why I love OpenBSD

2022-10-30 Installing DTail on OpenBSD

2022-07-30 Let's Encrypt with OpenBSD and Rex

2016-04-09 Jails and ZFS with Puppet on FreeBSD

E-Mail your comments to `paul@nospam.buetow.org`

Back to the main site