
Integrating OpenTelemetry with NestJS and Grafana Tempo
Step-by-step guide to integrate OpenTelemetry (OTel) with a NestJS app, run Grafana and Tempo with Docker, and visualize traces.
Modern distributed applications can become complex very quickly. Tracking down performance bottlenecks or debugging request flows across microservices is challenging without proper observability. That’s where OpenTelemetry (OTel) comes in. Combined with Grafana Tempo and Grafana dashboards, you get a powerful stack to collect, store, and visualize distributed traces.
In this tutorial, we’ll walk through setting up a NestJS application with OTel, running an OTel Collector, Tempo, and Grafana via Docker Compose, and finally visualizing our traces.
🚀 What We're Building
By the end of this guide, you will have:
- A NestJS API automatically instrumented with OpenTelemetry.
- A Docker Compose stack running Grafana, Tempo, and the OTel Collector.
- A fully functioning dashboard to view request traces.
Why this stack?
- OpenTelemetry: Vendor-neutral standard for telemetry data.
- Grafana Tempo: A high-volume, minimal-dependency distributed tracing backend.
- Grafana: The popular open-source visualization platform for exploring telemetry data.
Step 1: Set Up the NestJS Application
First, scaffold a new project (or use your existing one).
```shell
npm i -g @nestjs/cli
nest new otel-nestjs-demo
cd otel-nestjs-demo
```
Install the required OpenTelemetry packages. We'll use the Node.js SDK and auto-instrumentations to automatically capture HTTP, Express/NestJS, and other standard library events.
```shell
npm install @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-grpc \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions \
  @opentelemetry/instrumentation-nestjs-core \
  @opentelemetry/instrumentation-pino
```
Step 2: Implement Tracing (instrumentation.ts)
Create a file src/instrumentation.ts. This file will initialize the OTel SDK before the NestJS app starts.
Pro Tip: We use environment variables for configuration. This makes it easy to switch between local development and Docker environments.
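For instance, the collector endpoint can be switched per environment before starting the app (a sketch; the values match the compose file defined later in this guide):

```shell
# Local development: the app talks to the collector via the mapped host port.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# If the app itself runs inside docker-compose, use the service name instead:
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"

echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```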
```typescript
// src/instrumentation.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { Resource } from '@opentelemetry/resources';
import {
  ATTR_SERVICE_NAME,
  ATTR_SERVICE_VERSION,
} from '@opentelemetry/semantic-conventions';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
import { NestInstrumentation } from '@opentelemetry/instrumentation-nestjs-core';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

// Configure the exporter (sends data to the OTel Collector)
const traceExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4317',
});

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: 'nestjs-otel-demo',
    [ATTR_SERVICE_VERSION]: '1.0.0',
  }),
  traceExporter,
  instrumentations: [
    getNodeAutoInstrumentations(),
    new NestInstrumentation(), // Specific instrumentation for the NestJS lifecycle
  ],
});

sdk.start();

// Flush pending spans before the process exits
process.on('SIGTERM', () => {
  sdk
    .shutdown()
    .then(() => console.log('Tracing terminated'))
    .catch((error) => console.log('Error terminating tracing', error))
    .finally(() => process.exit(0));
});
```
Now, import this file at the very top of your src/main.ts. It must run before any other imports!
```typescript
// src/main.ts
import './instrumentation'; // <--- MUST BE FIRST
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();
```
Step 3: Infrastructure with Docker Compose
Running individual Docker containers is tedious. Let's use docker-compose to orchestrate Tempo, the OTel Collector, and Grafana together.
Create a docker-compose.yml in your project root:
```yaml
version: '3.8'

services:
  # 1. Grafana Tempo (tracing backend)
  tempo:
    image: grafana/tempo:latest
    command: ['-config.file=/etc/tempo.yaml']
    volumes:
      - ./docker-config/tempo.yaml:/etc/tempo.yaml
    ports:
      - '3200:3200' # Tempo HTTP
      - '4317'      # OTLP gRPC (internal to the compose network)

  # 2. OpenTelemetry Collector (the middleman)
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ['--config=/etc/otel-collector.yaml']
    volumes:
      - ./docker-config/otel-collector.yaml:/etc/otel-collector.yaml
    ports:
      - '4317:4317' # OTLP gRPC receiver
      - '4318:4318' # OTLP HTTP receiver
    depends_on:
      - tempo

  # 3. Grafana (visualization UI)
  grafana:
    image: grafana/grafana:latest
    ports:
      - '3001:3000'
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    depends_on:
      - tempo
```
Configuration Files
Create a folder docker-config and add these two files:
1. docker-config/otel-collector.yaml
This tells the collector to receive traces via OTLP and export them to Tempo.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  # Export to Tempo (running in the same Docker network)
  otlp:
    endpoint: 'tempo:4317'
    tls:
      insecure: true
  debug: # Useful for seeing traces in Docker logs

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, debug]
```
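Because every trace flows through the collector, you can later add sampling here without touching the app. As an optional illustration, a tail-sampling processor (available in the contrib image used above) that keeps only errors and slow requests might look like this; the policy names and the 500 ms threshold are arbitrary examples:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code: { status_codes: [ERROR] }
      - name: keep-slow
        type: latency
        latency: { threshold_ms: 500 }
```

If you enable this, remember to add `tail_sampling` to the `processors` list of the traces pipeline.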
2. docker-config/tempo.yaml
Minimal configuration for Tempo to store traces locally.
```yaml
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:

storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
    wal:
      path: /tmp/tempo/wal
```
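With local storage, trace blocks accumulate until the disk fills. For a demo you may want to cap retention; an optional addition to the same file (48h is an arbitrary example value) would be:

```yaml
compactor:
  compaction:
    block_retention: 48h
```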
Step 4: Launch and Verify
1. Start the infrastructure:

   ```shell
   docker-compose up -d
   ```

2. Start your NestJS app, making sure it points to the local collector:

   ```shell
   export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
   npm run start:dev
   ```

3. Generate traffic by hitting your API endpoint a few times:

   ```shell
   curl http://localhost:3000/
   ```

4. Visualize in Grafana:
   - Open http://localhost:3001
   - Go to Connections > Data Sources > Add Data Source
   - Select Tempo
   - Set the URL to `http://tempo:3200` (use the Docker service name)
   - Click Save & Test
   - Go to Explore, select Tempo, and run a query!
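In the Explore view you can filter with TraceQL. For example, a query matching traces from this demo (the service name is the one set in instrumentation.ts) would be:

```
{ resource.service.name = "nestjs-otel-demo" }
```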
🔧 Troubleshooting Guide
If things aren't working, don't panic. Distributed tracing involves several moving parts. Here is a checklist to resolve common issues.
1. "I don't see any traces in Grafana"

- Find the specific disconnect point:
  - App -> Collector: Look at your NestJS console. Do you see errors like `ServiceUnavailable`? Ensure `OTEL_EXPORTER_OTLP_ENDPOINT` is correct (`localhost:4317` for a local app, `otel-collector:4317` for an app running in Docker).
  - Collector -> Tempo: Check the logs with `docker logs otel-collector`. If you see "connection refused", ensure the collector config uses `endpoint: "tempo:4317"`.
- Protocol mismatch: The exporter used in this guide (`exporter-trace-otlp-grpc`) speaks gRPC. If you accidentally point it at an HTTP port (like 4318), it will fail silently or hang. Ensure you are using port 4317.
2. "Connection Refused" Errors

If your NestJS app throws errors trying to connect:

- Ensure the `otel-collector` container is actually running: `docker ps`.
- If running NestJS locally (outside Docker), you must map the port in docker-compose (`4317:4317`). Code inside Docker needs to use the hostname `otel-collector`, while code on your machine uses `localhost`.
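The host/container split above can be captured in a tiny helper (a sketch; checking for /.dockerenv is a common, though not guaranteed, container heuristic):

```shell
# Pick the collector hostname depending on where the app process runs.
if [ -f /.dockerenv ]; then
  host="otel-collector"   # inside the compose network, service DNS works
else
  host="localhost"        # on the host machine, via the mapped port
fi
echo "http://${host}:4317"
```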
3. Debugging with the Collector
We added a debug exporter in the collector config. This is your best friend.
Run:

```shell
docker logs -f otel-collector
```
If your app is successfully sending traces, you will see raw JSON span data scrolling in these logs.
- No logs? The app isn't sending data (check App config).
- Yes logs, but no Grafana? The issue is between Collector and Tempo (check Collector config).
Conclusion
You have now built a robust observability pipeline. By decoupling the app (NestJS) from the storage backend (Tempo) with the OTel Collector, you gain the flexibility to change backends or add sampling rules later without touching your application code.