Sending Telemetry to Observability Platforms
Like any application instrumented with OpenTelemetry, MetricsHub utilizes the OTLP protocol to transmit data. Although MetricsHub Community can directly send metrics to observability platforms that support OpenTelemetry natively, it is usually recommended in production environments to use an OpenTelemetry Collector to:
- Aggregate metrics across different sources.
- Serve as a proxy, particularly in firewall-secured areas.
- Manage error handling, including retries.
Bundled with OpenTelemetry Collector Contrib, MetricsHub Enterprise facilitates connections to over 30 different observability platforms.
Configure the OTel Collector (Enterprise Edition)
Like any standard OpenTelemetry Collector, MetricsHub Enterprise consists of:
- receivers
- processors
- exporters
- and several extensions.
This version of MetricsHub Enterprise leverages version 0.102.0 of OpenTelemetry.
To configure the OpenTelemetry Collector of MetricsHub Enterprise, edit the otel/otel-config.yaml file.
Important: We recommend using an editor supporting the Schemastore[1] to edit MetricsHub's configuration YAML files (Example: Visual Studio Code[2] and vscode.dev[3], with RedHat's YAML extension[4]).
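At a high level, otel-config.yaml declares these components and wires them together in a service pipeline. The simplified sketch below illustrates the overall layout only; the component names and settings shown are examples, and the file shipped with MetricsHub Enterprise is more complete:
extensions:
  health_check:
  basicauth:
    htpasswd:
      file: ../security/.htpasswd

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512                                  # example value
  batch:
    timeout: 10s

exporters:
  prometheusremotewrite/your-server:                # replace with the exporter(s) of your platform
    endpoint: https://your-server/api/v1/write      # placeholder endpoint

service:
  extensions: [health_check, basicauth]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [prometheusremotewrite/your-server]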
Receivers
OTLP gRPC
Warning: Only update this section if you customized the MetricsHub Agent settings[5].
The MetricsHub Agent pushes the collected data to the OTLP Receiver[6] via gRPC[7] on port TCP/4317.
The OTLP Receiver is configured by default with the self-signed certificate security/otel.crt and the private key security/otel.key to enable the TLS protocol. If you wish to use your own certificate file, configure the MetricsHub Agent with the correct Trusted Certificates File[8]. Because the OTLP Exporter of the MetricsHub Agent performs hostname verification, you will also have to add the localhost entry (DNS:localhost,IP:127.0.0.1) to the Subject Alternative Name (SAN) extension of the newly generated certificate.
Client requests are authenticated with the Basic Authenticator extension[9].
otlp:
  protocols:
    grpc:
      endpoint: localhost:4317
      tls:
        cert_file: ../security/otel.crt
        key_file: ../security/otel.key
      auth:
        authenticator: basicauth
OpenTelemetry Collector Internal Exporter for Prometheus
The OpenTelemetry Collector's internal Exporter for Prometheus is an optional source of data. It provides information about the collector's activity. It is referred to as prometheus/internal in the pipeline and leverages the standard prometheus receiver[10].
prometheus/internal:
  config:
    scrape_configs:
      - job_name: otel-collector-internal
        scrape_interval: 60s
        static_configs:
          - targets: [ localhost:8888 ]
Under the service:telemetry:metrics section, you can set the metrics level or the address of the OpenTelemetry Collector Internal Exporter (by default: localhost:8888).
service:
  telemetry:
    metrics:
      address: localhost:8888
      level: basic
Processors
By default, the collected metrics go through 5 processors:
- memory_limiter[11] to limit the memory consumed by the OpenTelemetry Collector process (configurable)
- filter[12] to include or exclude metrics
- batch[13] to process data in batches of 10 seconds (configurable)
- resourcedetection[14] to find out the actual host name of the system monitored
- metricstransform[15] to enrich the collected metrics, typically with labels required by the observability platforms. The metricstransform processor has many options to add, rename, or delete labels and metrics[15].
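For illustration, a processors section along the following lines could implement such a chain; the values shown (memory limit, metric name pattern, site label) are examples and do not reflect the defaults shipped in otel-config.yaml:
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512                      # example limit; tune to the host's available memory
  filter/exclude-example:
    metrics:
      exclude:
        match_type: regexp
        metric_names: [ 'example\..*' ] # hypothetical metric name pattern
  batch:
    timeout: 10s                        # process data in batches of 10 seconds
  resourcedetection:
    detectors: [system]                 # resolve the actual host name of the monitored system
  metricstransform/add-labels-example:
    transforms:
      - include: .*
        match_type: regexp
        action: update
        operations:
          - action: add_label
            new_label: site             # hypothetical label required by a platform
            new_value: datacenter-1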
Exporters
The exporters section defines the destination of the collected metrics. MetricsHub Enterprise version 1.0.00 includes support for all the OpenTelemetry Collector Contrib exporters[16], such as:
- Prometheus Remote Write Exporter[17]
- Datadog Exporter[18]
- Logging Exporter[19]
- New Relic (OTLP exporter)[20]
- Prometheus Exporter[21]
- Splunk SignalFx[22]
- and many more…[16]
You can configure several exporters in the same instance of the OpenTelemetry Collector to send the collected metrics to multiple platforms.
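For example, an exporters section targeting both a Prometheus-compatible backend and Datadog could resemble the sketch below; the endpoint URL is a placeholder and the API key is read from a hypothetical DD_API_KEY environment variable:
exporters:
  prometheusremotewrite/your-server:
    endpoint: https://your-prometheus-server/api/v1/write   # placeholder URL
  datadog/api:
    api:
      key: ${env:DD_API_KEY}                                # read the API key from an environment variable
Each exporter declared here must also be referenced in service:pipelines:metrics:exporters to become active (see The Pipeline below).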
Extensions
HealthCheck
The healthcheck[23] extension checks the status of MetricsHub Enterprise. It is activated by default and runs on port 13133 (http://localhost:13133[24]).
Refer to Check the collector is up and running[25] for more details.
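If the health endpoint must listen on a different address or port, the extension accepts an explicit endpoint setting; the value below is an example (the default port is 13133):
extensions:
  health_check:
    endpoint: 0.0.0.0:13133   # example: listen on all interfaces instead of localhost only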
zpages
The zpages extension provides debug information about all the different components. It notably provides:
- general information about MetricsHub
- details about the active pipeline
- activity details of each receiver and exporter configured in the pipeline.
Refer to Check the pipelines status[26] for more details.
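The extension is enabled by declaring it in the extensions section; its HTTP endpoint can be adjusted if needed, as in this sketch (localhost:55679 is the usual default):
extensions:
  zpages:
    endpoint: localhost:55679   # debug pages are then served under http://localhost:55679/debug/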
Basic Authenticator
The Basic Authenticator[9] extension authenticates the OTLP Exporter requests by comparing the Authorization header sent by the OTLP Exporter with the credentials provided in the security/.htpasswd file. Refer to the Apache htpasswd[27] documentation to learn how to manage user files for basic authentication.
basicauth:
  htpasswd:
    file: ../security/.htpasswd
The .htpasswd file is stored in the security directory.
Warning: If a different password is specified in the .htpasswd file, update the Basic Authentication Header[28] of the MetricsHub Agent.
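On the MetricsHub Agent side, the credentials are passed as an OTLP Authorization header. Assuming the agent exposes the standard otel.exporter.otlp.*.headers auto-configuration properties, the update could look like the following sketch, with placeholder values:
otel:
  # Placeholder: replace with "Basic " followed by base64("<user>:<new password>")
  otel.exporter.otlp.metrics.headers: Authorization=Basic <base64-encoded credentials>
  otel.exporter.otlp.logs.headers: Authorization=Basic <base64-encoded credentials>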
The Pipeline
Configured extensions, receivers, processors and exporters are taken into account if and only if they are declared in the pipeline:
service:
  telemetry:
    logs:
      level: info # Change to debug for more details
    metrics:
      address: localhost:8888
      level: basic
  extensions: [health_check, basicauth]
  pipelines:
    metrics:
      receivers: [otlp, prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, metricstransform]
      exporters: [prometheusremotewrite/your-server] # List here the platform of your choice
    # Uncomment the section below to enable logging of hardware alerts.
    # logs:
    #   receivers: [otlp]
    #   processors: [memory_limiter, batch, resourcedetection]
    #   exporters: [logging] # List here the platform of your choice
Configure the OTLP Exporter (Community Edition)
By default, the MetricsHub Agent pushes the collected metrics to the OTLP Receiver[6] through gRPC on port TCP/4317. To push data to the OTLP receiver of your choice:
- locate the otel section in your configuration file
- configure the otel.exporter.otlp.metrics.endpoint and otel.exporter.otlp.logs.endpoint parameters as follows:
otel:
  otel.exporter.otlp.metrics.endpoint: https://<my-host>:4317
  otel.exporter.otlp.logs.endpoint: https://<my-host>:4317

resourceGroups: #...
where <my-host> should be replaced with the hostname or IP address of the server where the OTLP receiver is installed.
Use the syntax below if you wish to push metrics to the Prometheus OTLP Receiver:
otel:
  otel.metrics.exporter: otlp
  otel.exporter.otlp.metrics.endpoint: http://<prom-server-host>:9090/api/v1/otlp/v1/metrics
  otel.exporter.otlp.metrics.protocol: http/protobuf
where <prom-server-host> should be replaced with the hostname or IP address of the server where Prometheus is running.
Note: For specific configuration details, refer to the OpenTelemetry Auto-Configure documentation[29]. This resource provides information about the properties to be configured depending on your deployment requirements.
Trusted certificates file
If an OTLP Receiver certificate is required, configure the otel.exporter.otlp.metrics.certificate and otel.exporter.otlp.logs.certificate parameters under the otel section:
otel:
  otel.exporter.otlp.metrics.certificate: /opt/metricshub/security/new-server-cert.crt
  otel.exporter.otlp.logs.certificate: /opt/metricshub/security/new-server-cert.crt

resourceGroups: # ...
The file should contain one or more X.509 certificates in PEM format.