Exploring the OTLP Data Format (OpenTelemetry Protocol) #
Introduction to OTLP #
The OpenTelemetry Protocol (OTLP) is the native and standardized protocol for transmitting telemetry data within the OpenTelemetry ecosystem. It defines how traces, metrics, logs, and profiles (experimental) are serialized, transported, and delivered between instrumented applications and observability backends.
Supported Data Types (Signals) #
OTLP carries four main types of telemetry signals:
1. Traces (Stable) #
- Endpoint: /v1/traces
- Main Components:
  - Span: Unit of work in a distributed trace
  - SpanEvent: Point-in-time event during span execution
  - SpanLink: Connections between related spans
- Use Cases: Distributed tracing, latency analysis, bottleneck detection
2. Metrics (Stable) #
- Endpoint: /v1/metrics
- Main Components:
  - Metric: Aggregated measurement (Gauge, Sum, Histogram, ExponentialHistogram, Summary)
  - DataPoint: Individual value at a specific timestamp (see the example after this list)
- Use Cases: Performance monitoring, alerting, dashboards
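To make the Metric/DataPoint relationship concrete, here is a sketch of a single monotonic Sum with one DataPoint in OTLP/JSON. The metric name is hypothetical, and 2 is the enum value for cumulative aggregation temporality:
{
  "name": "http.server.request.count",
  "unit": "{request}",
  "sum": {
    "aggregationTemporality": 2,
    "isMonotonic": true,
    "dataPoints": [
      {
        "startTimeUnixNano": "1700000000000000000",
        "timeUnixNano": "1700000060000000000",
        "asInt": "42",
        "attributes": [
          { "key": "http.response.status_code", "value": { "intValue": "200" } }
        ]
      }
    ]
  }
}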
3. Logs (Stable) #
- Endpoint: /v1/logs
- Main Components:
  - LogRecord: Structured log entry
- Use Cases: Debugging, auditing, correlation with traces
4. Profiles (Development) #
- Endpoint: /v1development/profiles
- Main Components:
  - Profile: CPU and memory profiling data
- Status: Experimental (OTLP 1.9.0)
- Use Cases: Code-level performance analysis
Transport Protocols #
OTLP supports two transport protocols:
| Protocol | Default Port | Encoding | Recommended Use |
|---|---|---|---|
| gRPC | 4317 | Binary Protobuf | High performance, internal communication |
| HTTP | 4318 | Binary Protobuf / JSON | Wide compatibility, web applications |
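In the SDKs, the transport is selected with the OTEL_EXPORTER_OTLP_PROTOCOL variable. A quick sketch of the three spec-defined values against a local Collector (note that http/json support varies by SDK):
# gRPC: binary protobuf over port 4317
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# HTTP with binary protobuf: port 4318, the SDK appends /v1/<signal>
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
# HTTP with JSON encoding
export OTEL_EXPORTER_OTLP_PROTOCOL="http/json"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"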
Exploring the OTLP Schema #
Prerequisites #
# Install required tools
brew install bufbuild/buf/buf
brew install protobuf
Cloning the Official Repository #
git clone https://github.com/open-telemetry/opentelemetry-proto.git
cd opentelemetry-proto
Viewing Protobuf Definitions #
# Inspect individual schemas
cat opentelemetry/proto/trace/v1/trace.proto
cat opentelemetry/proto/metrics/v1/metrics.proto
cat opentelemetry/proto/logs/v1/logs.proto
cat opentelemetry/proto/profiles/v1development/profiles.proto
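For a quick overview of which messages each file defines, a simple grep over the sources is enough (a convenience sketch, not part of the official tooling):
# List top-level message definitions per signal (nested messages are indented and skipped)
grep -H '^message' \
  opentelemetry/proto/trace/v1/trace.proto \
  opentelemetry/proto/metrics/v1/metrics.proto \
  opentelemetry/proto/logs/v1/logs.proto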
Generating Complete Schema #
Option 1: JSON Format (readable) #
# Generate complete schema in JSON
buf build -o -#format=json | jq '.' > otlp-schema.json
# Extract only specific data types
buf build -o -#format=json | \
jq '.file[] | select(.name | contains("trace")) | .messageType' > traces-schema.json
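To list every message type in the compiled schema regardless of which file defines it, something like this works (assuming jq is installed):
# Flat, de-duplicated list of all OTLP message names
buf build -o -#format=json | jq -r '.file[].messageType[]?.name' | sort -u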
Option 2: File Descriptor Set (binary) #
# Generate binary descriptor with all dependencies
protoc \
--descriptor_set_out=otlp-complete.desc \
--include_imports \
opentelemetry/proto/trace/v1/trace.proto \
opentelemetry/proto/metrics/v1/metrics.proto \
opentelemetry/proto/logs/v1/logs.proto \
opentelemetry/proto/profiles/v1development/profiles.proto
# Inspect content (raw format)
protoc --decode_raw < otlp-complete.desc | head -n 50
Validating Structure with Buf #
# Lint protobuf definitions
buf lint
# Check for breaking changes (comparing with previous version)
buf breaking --against '.git#branch=main'
# Generate simple Markdown documentation (message names and their field names)
buf build -o -#format=json | \
jq -r '.file[].messageType[]? | "## \(.name)\n" + ([.field[]?.name] | join(", "))' > docs.md
Hierarchical Data Structure #
All OTLP signals follow this hierarchy (a concrete JSON example follows the tree):
ResourceSpans/ResourceMetrics/ResourceLogs/ResourceProfiles
└── Resource (entity attributes: service.name, host, etc)
└── ScopeSpans/ScopeMetrics/ScopeLogs/ScopeProfiles
└── InstrumentationScope (library that generated the data)
└── Spans / Metrics / LogRecords / Profiles
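To make the hierarchy concrete, here is a minimal OTLP/JSON trace payload that follows the ResourceSpans → ScopeSpans → Span nesting. It is a sketch with made-up IDs and a hypothetical example-service; saving it as trace.json also sets up the file used in the debugging examples later in this post:
cat > trace.json <<'EOF'
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "example-service" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "example-instrumentation" },
          "spans": [
            {
              "traceId": "5b8efff798038103d269b633813fc60c",
              "spanId": "eee19b7ec3c1b174",
              "name": "GET /checkout",
              "kind": 2,
              "startTimeUnixNano": "1700000000000000000",
              "endTimeUnixNano": "1700000001000000000",
              "attributes": [
                { "key": "http.request.method", "value": { "stringValue": "GET" } }
              ]
            }
          ]
        }
      ]
    }
  ]
}
EOF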
Practical Example: Analyzing a Trace #
# Export example trace to JSON
buf build -o -#format=json | \
jq '.file[] | select(.name == "opentelemetry/proto/trace/v1/trace.proto") |
.messageType[] | select(.name == "Span")' > span-structure.json
# Main fields of a Span
cat span-structure.json | jq '.field[] | {name, type, number}'
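For a quicker glance at just the field names, the same file can be reduced further:
# List only the Span field names
jq -r '.field[].name' span-structure.json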
OTLP Data Types Reference #
Complete JSON Schema #
{
"otlp_data_types": {
"traces": {
"status": "stable",
"endpoint": "/v1/traces",
"service": "ExportTraceServiceRequest",
"main_types": {
"span": {
"description": "Represents a unit of work or operation in a distributed trace",
"proto": "opentelemetry/proto/trace/v1/trace.proto"
},
"span_event": {
"description": "Point-in-time event that occurs during span execution",
"proto": "opentelemetry/proto/trace/v1/trace.proto"
}
}
},
"metrics": {
"status": "stable",
"endpoint": "/v1/metrics",
"service": "ExportMetricsServiceRequest",
"main_types": {
"metric": {
"description": "Numerical measurement aggregated over time",
"proto": "opentelemetry/proto/metrics/v1/metrics.proto",
"metric_types": [
"Gauge",
"Sum",
"Histogram",
"ExponentialHistogram",
"Summary"
]
},
"datapoint": {
"description": "Individual value of a metric at a specific point in time",
"proto": "opentelemetry/proto/metrics/v1/metrics.proto"
}
}
},
"logs": {
"status": "stable",
"endpoint": "/v1/logs",
"service": "ExportLogsServiceRequest",
"main_types": {
"log_record": {
"description": "Individual structured log record",
"proto": "opentelemetry/proto/logs/v1/logs.proto"
}
}
},
"profiles": {
"status": "development",
"endpoint": "/v1development/profiles",
"service": "ExportProfilesServiceRequest",
"main_types": {
"profile": {
"description": "CPU and memory profiling data (experimental)",
"proto": "opentelemetry/proto/profiles/v1development/profiles.proto",
"note": "Signal still in active development, subject to changes"
}
}
}
},
"transport_protocols": {
"grpc": {
"default_port": 4317,
"encoding": "binary protobuf"
},
"http": {
"default_port": 4318,
"encoding": ["binary protobuf", "JSON protobuf"]
}
},
"common_structures": {
"resource": {
"description": "Attributes that identify the entity producing telemetry (e.g., service.name, host.name)"
},
"instrumentation_scope": {
"description": "Library or module that generated the telemetry"
},
"attributes": {
"description": "Key-value pairs describing the context",
"supported_types": [
"string",
"int64",
"double",
"bool",
"bytes",
"array",
"map (since OTLP 1.9.0)"
]
}
}
}
What’s New in OTLP 1.9.0 #
- Complex Attributes: All signals now support maps and heterogeneous arrays as attribute values (see the example below)
- Profiles Signal: Experimental addition for continuous profiling
- ExponentialHistogram Improvements: Native conversion to Prometheus
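As an illustration of the complex attribute support, this is roughly how a map-valued attribute looks in OTLP/JSON (a sketch; the http.request.headers key is hypothetical):
{
  "key": "http.request.headers",
  "value": {
    "kvlistValue": {
      "values": [
        { "key": "content-type", "value": { "stringValue": "application/json" } },
        { "key": "content-length", "value": { "intValue": "128" } }
      ]
    }
  }
}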
Common OTLP Use Cases #
1. Application Performance Monitoring (APM) #
# Configure OTLP exporter for traces
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_SERVICE_NAME="my-service"
2. Metrics Collection #
# Configure metrics export
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://localhost:4318/v1/metrics"
export OTEL_METRICS_EXPORTER="otlp"
3. Log Aggregation #
# Configure logs export
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="http://localhost:4318/v1/logs"
export OTEL_LOGS_EXPORTER="otlp"
OTLP Collector Configuration #
Basic Collector Setup #
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
exporters:
  # The debug exporter replaces the deprecated logging exporter in recent Collector versions
  debug:
    verbosity: detailed
  otlp:
    endpoint: backend:4317
    tls:
      insecure: true   # assumes a plaintext backend; remove for TLS-enabled backends
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]
Debugging OTLP Data #
Inspecting Protobuf Messages #
# Decode a binary OTLP message (run from the opentelemetry-proto repo root so imports resolve)
cat trace.bin | protoc --decode=opentelemetry.proto.trace.v1.TracesData \
opentelemetry/proto/trace/v1/trace.proto
# Validate JSON payload
cat trace.json | jq '.resourceSpans[0].scopeSpans[0].spans[0]'
Testing OTLP Endpoints #
# Test gRPC endpoint (run from the opentelemetry-proto repo root; the Collector
# typically does not expose gRPC reflection, so pass the proto files explicitly)
grpcurl -plaintext \
  -import-path . \
  -proto opentelemetry/proto/collector/trace/v1/trace_service.proto \
  -d @ localhost:4317 \
  opentelemetry.proto.collector.trace.v1.TraceService/Export < trace.json
# Note: grpcurl expects standard protobuf JSON, where traceId/spanId are
# base64-encoded bytes rather than the hex strings used by OTLP/HTTP JSON
# Test HTTP endpoint
curl -X POST http://localhost:4318/v1/traces \
-H "Content-Type: application/json" \
-d @trace.json
Additional Resources #
- Official Specification: https://opentelemetry.io/docs/specs/otlp/
- Proto Repository: https://github.com/open-telemetry/opentelemetry-proto
- Semantic Conventions: https://opentelemetry.io/docs/specs/semconv/
- OTLP Exporter Configuration: https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/
Local Testing Environment #
Using Docker Compose #
version: '3.8'
services:
  otel-collector:
    image: otel/opentelemetry-collector:latest
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
    volumes:
      - ./otel-config.yaml:/etc/otel-collector-config.yaml
    command: ["--config=/etc/otel-collector-config.yaml"]
Using Kubernetes (Kind) #
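The Operator's admission webhooks depend on cert-manager, so install it first if the cluster does not already have it (a sketch using the upstream release manifest; pin a specific version for real deployments):
# Install cert-manager, a prerequisite for the OpenTelemetry Operator
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml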
# Deploy OTLP Collector to Kind cluster
kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-operator/main/bundle.yaml
# Create Collector instance
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otlp-collector
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      # debug replaces the deprecated logging exporter
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
EOF
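Once applied, the Operator creates a Deployment for the custom resource, conventionally named <name>-collector (so otlp-collector-collector here). Assuming that naming, a quick check:
# Verify the custom resource and the workload the Operator created
kubectl get opentelemetrycollectors
kubectl get pods
kubectl logs deployment/otlp-collector-collector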
Best Practices #
1. Choose the Right Transport #
- gRPC: Use for internal services, high throughput scenarios
- HTTP: Use for browser-based apps, firewall-friendly environments
2. Implement Batching #
# Configure batch processor
export OTEL_BSP_SCHEDULE_DELAY=5000 # 5 seconds
export OTEL_BSP_MAX_QUEUE_SIZE=2048
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=512
3. Use Compression #
# Enable gzip compression
export OTEL_EXPORTER_OTLP_COMPRESSION=gzip
4. Set Appropriate Timeouts #
# Configure timeout (in milliseconds)
export OTEL_EXPORTER_OTLP_TIMEOUT=10000 # 10 seconds
5. Secure Your Endpoints #
# Use TLS for production
export OTEL_EXPORTER_OTLP_ENDPOINT="https://collector.example.com:4317"
export OTEL_EXPORTER_OTLP_CERTIFICATE=/path/to/cert.pem
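On the Collector side, TLS is configured per receiver protocol. A minimal sketch, assuming the server certificate and key are mounted at /certs:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        tls:
          cert_file: /certs/server.crt
          key_file: /certs/server.key
      http:
        endpoint: 0.0.0.0:4318
        tls:
          cert_file: /certs/server.crt
          key_file: /certs/server.key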
Signal Status (January 2025) #
- ✅ Traces, Metrics, Logs: Stable (production-ready)
- ⚠️ Profiles: Development (subject to changes)
Conclusion #
OTLP provides a vendor-neutral, efficient, and extensible protocol for telemetry data transmission. Understanding its data types and structure is essential for building robust observability pipelines. As the protocol continues to evolve with new signals like Profiles, it remains the foundation of modern observability practices.
Last Updated: January 2025
OTLP Version: 1.9.0
Protocol Specification: https://opentelemetry.io/docs/specs/otlp/