The gateway collector deployment pattern consists of applications (or other collectors) sending telemetry signals to a single OTLP endpoint provided by one or more collector instances running as a standalone service (for example, a deployment in Kubernetes), typically per cluster, per data center or per region.
In the general case you can use an out-of-the-box load balancer to distribute the load amongst the collectors.
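For illustration, a minimal sketch of one such gateway collector instance behind the load balancer might look like the following; the `otlp/backend` exporter name and the `backend.example.com:4317` endpoint are placeholders for whatever backend you actually use:

```yaml
# Hypothetical gateway collector instance sitting behind an
# off-the-shelf load balancer: receives OTLP and forwards it.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  otlp/backend:
    # placeholder backend endpoint
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/backend]
```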
For use cases where the processing of the telemetry data has to happen in a specific collector, you would use a two-tiered setup with a collector that has a pipeline configured with the Trace ID/Service-name aware load-balancing exporter in the first tier and the collectors handling the scale out in the second tier. For example, you will need to use the load-balancing exporter when using the Tail Sampling processor so that all spans for a given trace reach the same collector instance where the tail sampling policy is applied.
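To make the two tiers more concrete, a second-tier collector in such a tail-sampling setup could look roughly like the sketch below; the specific policy (keeping only traces that contain an error span) and the `backend.example.com:4317` endpoint are illustrative assumptions, not prescribed by the pattern:

```yaml
# Hypothetical second-tier collector: receives spans from the
# first-tier load-balancing collector and applies tail sampling.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      # keep only traces that contain at least one error span
      - name: errors-only
        type: status_code
        status_code:
          status_codes: [ERROR]

exporters:
  otlp/backend:
    # placeholder backend endpoint
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp/backend]
```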
Let’s have a look at such a case where we are using the load-balancing exporter. Note that the load-balancing exporter only supports pipelines of the `traces` type.

For a concrete example of the centralized collector deployment pattern we first need to have a closer look at the load-balancing exporter. It has two main configuration fields:

- The `resolver`, which determines where to find the downstream collectors (or: backends). If you use the `static` sub-key here, you have to manually enumerate the collector URLs. The other supported resolver is the DNS resolver, which periodically checks for updates and resolves IP addresses. For this resolver type, the `hostname` sub-key specifies the hostname to query in order to obtain the list of IP addresses.
- With the `routing_key` field you tell the load-balancing exporter to route spans to specific downstream collectors. If you set this field to `traceID` (the default), the load-balancing exporter exports spans based on their `traceID`. If you instead set `routing_key` to `service`, it exports spans based on their service name, which is useful when using connectors like the Span Metrics connector: all spans of a service are sent to the same downstream collector for metric collection, guaranteeing accurate aggregations.

The first-tier collector servicing the OTLP endpoint would be configured as shown below:
Static resolver:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loadbalancing:
    protocol:
      otlp:
        # all options of the OTLP exporter are supported
        # here, except the endpoint
        tls:
          insecure: true
    resolver:
      # statically enumerated list of downstream collectors
      static:
        hostnames:
          - collector-1.example.com:4317
          - collector-2.example.com:5317
          - collector-3.example.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```
DNS resolver:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loadbalancing:
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      # use DNS to discover the downstream collector IP addresses
      dns:
        hostname: collectors.example.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```
DNS resolver with `service` as the routing key:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loadbalancing:
    # route spans by service name rather than by trace ID
    routing_key: "service"
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      dns:
        hostname: collectors.example.com
        port: 5317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```
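Building on the `service` routing key example, a sketch of a downstream (second-tier) collector that feeds all spans of a service into the Span Metrics connector might look as follows; the listening port and the `otlp/backend` endpoint are assumptions chosen to match the example above:

```yaml
# Hypothetical second-tier collector that generates metrics from spans.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:5317

connectors:
  spanmetrics:

exporters:
  otlp/backend:
    # placeholder backend endpoint
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      # spans go to the backend and into the Span Metrics connector
      exporters: [otlp/backend, spanmetrics]
    metrics:
      # metrics produced by the connector are exported to the backend
      receivers: [spanmetrics]
      exporters: [otlp/backend]
```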
The load-balancing exporter emits metrics including `otelcol_loadbalancer_num_backends` and `otelcol_loadbalancer_backend_latency` that you can use for health and performance monitoring of the OTLP endpoint collector.
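One way to surface these metrics is through the collector's own telemetry configuration, sketched below; the exact keys vary between collector versions, so treat `level` and `address` here as assumptions based on the classic `service::telemetry::metrics` settings:

```yaml
# Added to the first-tier collector configuration shown above.
service:
  telemetry:
    metrics:
      # expose the collector's internal metrics, including the
      # otelcol_loadbalancer_* series, on a Prometheus endpoint
      level: detailed
      address: 0.0.0.0:8888
```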
Pros:
Cons: