124 changes: 124 additions & 0 deletions doc/09-object-types.md
@@ -1241,6 +1241,130 @@ for an example.
TLS for the HTTP proxy can be enabled with `enable_tls`. In addition to that
you can specify the certificates with the `ca_path`, `cert_path` and `cert_key` attributes.

### ElasticsearchDatastreamWriter <a id="objecttype-elasticsearchdatastreamwriter"></a>

Writes check result metrics and performance data to an Elasticsearch time series data stream.
This configuration object is available as the [elasticsearch datastream feature](14-features.md#elasticsearch-datastream-writer).


Example:

```
object ElasticsearchDatastreamWriter "datastreamwriter" {
  host = "127.0.0.1"
  port = 9200
  datastream_namespace = "production"

  enable_send_perfdata = true

  host_tags_template = ["icinga-production"]
  filter = {{ "datastream" in host.groups }}

  flush_threshold = 1024
  flush_interval = 10s
}
```

Configuration Attributes:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
host | String | **Required.** Elasticsearch host address. Defaults to `127.0.0.1`.
port | Number | **Required.** Elasticsearch port. Defaults to `9200`.
enable\_tls | Boolean | **Optional.** Whether to use a TLS stream. Defaults to `false`.
insecure\_noverify | Boolean | **Optional.** Disable TLS peer verification.
ca\_path | String | **Optional.** Path to CA certificate to validate the remote host. Requires `enable_tls` set to `true`.
enable\_ha | Boolean | **Optional.** Enable the high availability functionality. Only valid in a [cluster setup](06-distributed-monitoring.md#distributed-monitoring-high-availability-features). Defaults to `false`.
flush\_interval | Duration | **Optional.** How long to buffer data points before transferring to Elasticsearch. Defaults to `10s`.
flush\_threshold | Number | **Optional.** How many data points to buffer before forcing a transfer to Elasticsearch. Defaults to `1024`.

Authentication:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
  username                  | String                | **Optional.** Basic auth username for Elasticsearch.
  password                  | String                | **Optional.** Basic auth password for Elasticsearch.
  api\_token                | String                | **Optional.** Authorization token for Elasticsearch.
cert\_path | String | **Optional.** Path to host certificate to present to the remote host for mutual verification. Requires `enable_tls` set to `true`.
key\_path | String | **Optional.** Path to host key to accompany the cert\_path. Requires `enable_tls` set to `true`.

Changing the behavior of the writer:

Name | Type | Description
--------------------------|-----------------------|----------------------------------
datastream_namespace | String | **Required.** Suffix for the datastream names. Defaults to `default`.
manage\_index\_template | Boolean | **Optional.** Whether to create and manage the index template in Elasticsearch. This requires the user to have `manage_index_templates` permission in Elasticsearch. Defaults to `true`.
enable\_send\_perfdata | Boolean | **Optional.** Send parsed performance data metrics for check results. Defaults to `false`.
  enable\_send\_thresholds  | Boolean               | **Optional.** Whether to send warn, crit, min & max performance data. Defaults to `false`.
  host\_tags\_template      | Array                 | **Optional.** Allows adding [tags](https://www.elastic.co/docs/reference/ecs/ecs-base#field-tags) to the document for a Host check result.
  service\_tags\_template   | Array                 | **Optional.** Allows adding [tags](https://www.elastic.co/docs/reference/ecs/ecs-base#field-tags) to the document for a Service check result.
  host\_labels\_template    | Dictionary            | **Optional.** Allows adding [labels](https://www.elastic.co/docs/reference/ecs/ecs-base#field-labels) to the document for a Host check result.
  service\_labels\_template | Dictionary            | **Optional.** Allows adding [labels](https://www.elastic.co/docs/reference/ecs/ecs-base#field-labels) to the document for a Service check result.
filter | Function | **Optional.** An expression to filter which check results should be sent to Elasticsearch. Defaults to sending all check results.
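The interplay of `flush_interval` and `flush_threshold` (buffer data points, transfer whenever either limit is reached first) can be sketched as follows. This is an illustrative Python sketch, not the actual C++ writer; the class and method names are invented:

```python
import time

class FlushBuffer:
    """Illustrative sketch of the buffering behavior: data points are flushed
    when either flush_threshold entries are buffered or flush_interval seconds
    have elapsed, whichever comes first."""

    def __init__(self, flush_threshold=1024, flush_interval=10):
        self.threshold = flush_threshold
        self.interval = flush_interval
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.threshold or \
           time.monotonic() - self.last_flush >= self.interval:
            return self.flush()
        return None

    def flush(self):
        batch, self.buffer = self.buffer, []
        self.last_flush = time.monotonic()
        return batch  # the real writer sends such a batch to Elasticsearch
```

Either limit alone triggers a transfer, so a low-traffic setup is bounded by `flush_interval` while a busy one is bounded by `flush_threshold`.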

#### Macro Usage (Tags, Labels & Namespace)

Macros can be used inside the following template attributes:

- `host_tags_template` (array of strings)
- `service_tags_template` (array of strings)
- `host_labels_template` (dictionary of key -> string value)
- `service_labels_template` (dictionary of key -> string value)
- `datastream_namespace` (string)

Behavior:

- Tags: Each array element may contain zero or more macros. If at least one macro is missing/unresolvable, the entire tag element is skipped and a debug log entry is written.
- Labels: Each dictionary value may contain macros. If at least one macro inside the value is missing, that label key/value pair is skipped and a debug log entry is written.
- Namespace: The `datastream_namespace` string may contain macros. If a macro is missing or resolves to an empty value, the writer falls back to the default namespace `default`.
- Validation: A template string with an unterminated `$` (e.g. `"$host.name"`) raises a configuration validation error referencing the original string.
- Macros are never partially substituted: either all macros in the string resolve and the rendered value is used, or (for tags/labels) the entry is skipped.
- Normalization: Performance data metric labels and the resolved datastream namespace are normalized: leading whitespace and leading special characters are trimmed; all remaining special (non-alphanumeric) characters are replaced with an underscore; consecutive underscores are collapsed; and leading/trailing underscores are removed. This ensures stable, Elasticsearch-friendly field and namespace names.
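As a rough illustration, the normalization rules above can be sketched in Python. This is not the actual C++ implementation; lowercasing is included here as an assumption, since Elasticsearch requires data stream names to be lowercase:

```python
import re

def normalize(label: str) -> str:
    """Illustrative sketch of the normalization rules described above."""
    s = label.lstrip()                   # trim leading whitespace
    s = re.sub(r"[^0-9A-Za-z]", "_", s)  # replace special characters with '_'
    s = re.sub(r"_+", "_", s)            # collapse consecutive underscores
    s = s.strip("_")                     # drop leading/trailing underscores
    return s.lower()                     # assumed: data stream names must be lowercase
```

For example, a check command named `example_FOOBAR_bar-foo` would yield the data stream component `example_foobar_bar_foo`.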
> **Review comment (Member):** Normalization doesn't seem to work or maybe I'm doing something wrong? I just tested with commit b732f4723.
>
> ```
> check_command = "example_FOOBAR_bar-foo"
> ```
>
> I still see:
>
> ```
> [2025-11-17 14:16:10 +0000] warning/ElasticsearchDatastreamWriter:
> Error during document creation: illegal_argument_exception: data_stream
> [metrics-icinga2.example_FOOBAR_bar-foo-default] must be lowercase
> ```

> **Review comment (Member):** Just tested 309088156, looks good now:
>
> ```
> yellow open .ds-metrics-icinga2.i_like_icinga-default-2025.11.18-000001           ZmMtvAD0RJaX2LBGHG3ktA 1 1   2 0  36.2kb  36.2kb  36.2kb
> yellow open .ds-metrics-icinga2.example_foobar_bar_foo-default-2025.11.18-000001  7F9DZE7VRSa0yT8kJZ0Jtg 1 1   2 0    227b    227b    227b
> ```

Examples:

```
object ElasticsearchDatastreamWriter "example-datastream" {
  datastream_namespace = "$host.vars.env$" // Falls back to "default" if $host.vars.env$ is missing

  host_tags_template = [
    "env-$host.vars.env$",
    "$host.name$"
  ]

  service_tags_template = [
    "svc-$service.name$",
    "$service.display_name$"
  ]

  host_labels_template = {
    os = "$host.vars.os$"
    fqdn = "$host.name$"
  }

  service_labels_template = {
    check_cmd = "$service.check_command$"
    attempted_env = "$host.vars.missing_env$" // Skipped if missing_env not set
  }

  filter = {{ service && "production" in host.groups }}
}
```

A missing macro example for a host check result:

- The `service_tags_template` element `"svc-$service.name$"` is skipped (service not in scope).
- The `service_labels_template` value `"$service.check_command$"` is skipped for host check results.
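The all-or-nothing substitution described above can be sketched as follows. This is illustrative Python with a simplified macro syntax; the function name is invented and this is not the actual resolver:

```python
import re

# Simplified macro pattern: $dotted.identifier$
MACRO = re.compile(r"\$([a-z_.]+)\$", re.IGNORECASE)

def render_template(template: str, macros: dict):
    """Resolve all macros in a tag/label template string.
    Returns None (entry skipped) if any macro cannot be resolved,
    mirroring the skip-and-debug-log behavior described above."""
    out = []
    last = 0
    for m in MACRO.finditer(template):
        if m.group(1) not in macros:
            return None  # one unresolved macro skips the entire entry
        out.append(template[last:m.start()])
        out.append(str(macros[m.group(1)]))
        last = m.end()
    out.append(template[last:])
    return "".join(out)
```

For a host check result, `render_template("svc-$service.name$", host_macros)` would return `None` because no service macros are in scope, so the tag is dropped rather than partially rendered.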

#### Filter Expression

The filter accepts a function literal (lambda expression). Only the variables `host` and `service` are available inside it; `service` is `null` for host check results.

Examples:
```
filter = {{ "production" in host.groups }}
filter = {{ service && "linux" in host.groups }}
```
If the filter returns true, the check result is sent; otherwise it is skipped.
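These semantics can be restated in Python for clarity. This is an illustrative sketch only; `should_send` and the lambda names are invented, and `host`/`service` are stand-in dictionaries rather than Icinga 2 objects:

```python
def should_send(filter_fn, host, service=None):
    """Decide whether a check result is sent to Elasticsearch.
    service is None for host check results, mirroring the DSL behavior."""
    if filter_fn is None:
        return True  # no filter configured: all check results are sent
    return bool(filter_fn(host, service))

# The two DSL examples above, restated as Python lambdas:
production_only = lambda host, service: "production" in host["groups"]
linux_services = lambda host, service: service is not None and "linux" in host["groups"]
```

Note that `linux_services` rejects host check results outright, because `service` is unset for them.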

### ExternalCommandListener <a id="objecttype-externalcommandlistener"></a>

Implements the Icinga 1.x command pipe which can be used to send commands to Icinga.
124 changes: 124 additions & 0 deletions doc/14-features.md
@@ -439,6 +439,130 @@ The recommended way of running Elasticsearch in this scenario is a dedicated server
where you either have the Elasticsearch HTTP API, or a TLS secured HTTP proxy,
or Logstash for additional filtering.


#### Elasticsearch Datastream Writer <a id="elasticsearch-datastream-writer"></a>

> **Note**
>
> This is a newer alternative to the Elasticsearch Writer above. The Elasticsearch Datastream Writer uses
> Elasticsearch's data stream feature and follows the Elastic Common Schema (ECS), providing better performance
> and data organization. Use this writer for new installations. The original Elasticsearch Writer is still
> available for backward compatibility.
>
> OpenSearch: The data stream mode and ECS component template usage differ slightly in OpenSearch. The
> ElasticsearchDatastreamWriter focuses on Elasticsearch compatibility first. OpenSearch can ingest the data,
> but you may need to adapt the installed index/component templates manually (e.g. remove time_series mode if
> unsupported, adjust mappings). The option `manage_index_template` will not work with OpenSearch.


This feature sends check results with performance data to an [Elasticsearch](https://www.elastic.co/products/elasticsearch) instance or cluster.

> **Note**
>
> This feature requires Elasticsearch to support time series data streams (Elasticsearch 8.x+), and to have the ECS
> component template installed. It was tested successfully with Elasticsearch 8.12 and 9.0.8.


Enable the feature and restart Icinga 2.

```bash
icinga2 feature enable elasticsearchdatastream
```

The default configuration expects an Elasticsearch instance running on `localhost` on port `9200`
and writes to datastreams with the pattern `metrics-icinga2.<check>-<namespace>`.

More configuration details can be found [here](09-object-types.md#objecttype-elasticsearchdatastreamwriter).

#### Current Elasticsearch Schema <a id="elasticsearch-datastream-writer-schema"></a>

The documents for the ElasticsearchDatastreamWriter follow the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current/index.html)
version `8.0` as closely as possible, with some additional changes to fit the Icinga 2 data model.
All documents are written to a data stream of the format `metrics-icinga2.<check>-<datastream_namespace>`,
where `<check>` is the name of the check command being executed. This keeps the number of fields per index low
and groups documents with the same performance data together. `<datastream_namespace>` is an optional
configuration parameter to further separate documents, e.g. by environment like `production` or `development`.
The `datastream_namespace` can also be used to separate documents by hostgroup or zone: use the
`filter` function to partition check results across several writers with different namespaces.
Time‑series dimensions are applied to `host.name` and (when present) `service.name`, aligning with ECS host and service
definitions: [ECS host fields](https://www.elastic.co/guide/en/ecs/current/ecs-host.html),
[ECS service fields](https://www.elastic.co/guide/en/ecs/current/ecs-service.html).
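The naming scheme can be summarized in a short sketch (illustrative Python; the function name is invented, and both parts are assumed to already be normalized):

```python
def datastream_name(check_command: str, namespace: str = "default") -> str:
    # Pattern described above: metrics-icinga2.<check>-<datastream_namespace>.
    # Both components are assumed to be normalized (lowercase, underscores only).
    return f"metrics-icinga2.{check_command}-{namespace}"
```

So a `disk` check with `datastream_namespace = "production"` writes to `metrics-icinga2.disk-production`, while the same check without a configured namespace writes to `metrics-icinga2.disk-default`.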

Icinga 2 automatically adds the following threshold metrics
when present in the performance data:

```
perfdata.<perfdata-label>.min
perfdata.<perfdata-label>.max
perfdata.<perfdata-label>.warn
perfdata.<perfdata-label>.crit
```
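The "only when present" behavior can be sketched as follows. This is illustrative Python; `threshold_fields` is an invented helper name, not part of Icinga 2:

```python
def threshold_fields(label, warn=None, crit=None, minimum=None, maximum=None):
    """Build the perfdata.<label>.{warn,crit,min,max} fields listed above.
    Thresholds absent from the parsed performance data are simply omitted."""
    fields = {}
    for name, value in (("warn", warn), ("crit", crit),
                        ("min", minimum), ("max", maximum)):
        if value is not None:
            fields[f"perfdata.{label}.{name}"] = value
    return fields
```

A perfdata item that carries only warn/crit thus produces only the `warn` and `crit` fields, keeping the index mapping sparse.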

#### Adding additional tags and labels <a id="elasticsearch-datastream-writer-custom-tags-labels"></a>

Additionally, it is possible to configure custom tags and labels that are applied to the metrics via
`host_tags_template`/`service_tags_template` and `host_labels_template`/`service_labels_template`
respectively. Depending on whether the write event was triggered by a service or a host object,
the corresponding tags are added to the Elasticsearch documents.

A host metrics entry configured with the following templates:

```
host_tags_template = ["production", "$host.groups$"]
host_labels_template = {
  os = "$host.vars.os$"
}
```

will, in addition to the standard fields, also contain:

```
"tags": ["production", "linux-servers;group-A"],
"labels": { "os": "Linux" }
```

#### Filtering check results <a id="elasticsearch-datastream-writer-filtering"></a>

You can filter which check results are sent to Elasticsearch by using the `filter` parameter.
It takes a function (expression) evaluated for every check result and must return a boolean.
If the function returns `true`, the check result is sent; otherwise it is skipped.

Only the variables `host` and `service` are available inside this expression.
For host check results `service` is not set (null/undefined). No other variables (such as
the raw check result object) are exposed.

Example configuration that only sends service check results for hosts in the `linux-server` hostgroup:


```
object ElasticsearchDatastreamWriter "elasticsearchdatastream" {
  ...
  datastream_namespace = "production"
  filter = {{ service && "linux-server" in host.groups }}
}
```

#### Elasticsearch Datastream Writer in Cluster HA Zones <a id="elasticsearch-datastream-writer-cluster-ha"></a>

The Elasticsearch Datastream Writer feature supports [high availability](06-distributed-monitoring.md#distributed-monitoring-high-availability-features)
in cluster zones.

By default, all endpoints in a zone will activate the feature and start
writing events to the Elasticsearch HTTP API. In HA enabled scenarios,
it is possible to set `enable_ha = true` in all feature configuration
files. This allows each endpoint to calculate the feature authority,
so that only one endpoint actively writes events while the other
endpoints pause the feature.

When the cluster connection breaks at some point, the remaining endpoint(s)
in that zone will automatically resume the feature. This built-in failover
mechanism ensures that events are written even if the cluster fails.

The recommended way of running Elasticsearch in this scenario is a dedicated server
where you either have the Elasticsearch HTTP API, or a TLS secured HTTP proxy,
or Logstash for additional filtering.


### Graylog Integration <a id="graylog-integration"></a>

#### GELF Writer <a id="gelfwriter"></a>
Expand Down
82 changes: 82 additions & 0 deletions etc/icinga2/features-available/elasticsearchdatastream.conf
@@ -0,0 +1,82 @@
/*
* The ElasticsearchDatastreamWriter feature writes Icinga 2 events to an Elasticsearch datastream.
* This feature requires Elasticsearch 8.12 or later.
*/

object ElasticsearchDatastreamWriter "elasticsearch" {
host = "127.0.0.1"
port = 9200

/* To enable a https connection, set enable_tls to true. */
// enable_tls = false

/* The datastream namespace to use. This can be used to separate different
* Icinga instances or let multiple Writers write to different
* datastreams in the same Elasticsearch cluster by using the filter option.
* The Elasticsearch datastream name will be
* "metrics-icinga2.{check}-{datastream_namespace}".
*/
// datastream_namespace = "default"

/* Icinga 2 can authenticate against Elasticsearch using one of three methods:
 * 1. Basic authentication with username and password.
 * 2. Bearer token authentication with api_token.
 * 3. Client certificate authentication with cert_path and key_path.
 */
// username = "icinga2"
// password = "changeme"

// api_token = ""

// cert_path = "/path/to/cert.pem"
// key_path = "/path/to/key.pem"
// ca_path = "/path/to/ca.pem"

/* Enable sending the threshold values as additional fields
* with the service check metrics. If set to true, it will
* send warn and crit for every performance data item.
*/
// enable_send_thresholds = false

/* The flush settings control how often data is sent to Elasticsearch.
* You can either flush based on a time interval or the number of
* events in the buffer. Whichever comes first will trigger a flush.
*/
// flush_threshold = 1024
// flush_interval = 10s

/* By default, all endpoints in a zone will activate the feature and start
 * writing events to the Elasticsearch HTTP API. In HA enabled scenarios,
 * it is possible to set `enable_ha = true` in all feature configuration
 * files. This allows each endpoint to calculate the feature authority,
 * so that only one endpoint actively writes events while the other
 * endpoints pause the feature.
 */
// enable_ha = false

/* By default, the feature will create an index template in Elasticsearch
* for the datastreams. If you want to manage the index template yourself,
* set manage_index_template to false.
*/
// manage_index_template = true

/* Additional tags and labels can be added to the host and service
 * documents by using the host_tags_template, service_tags_template,
 * host_labels_template and service_labels_template options.
 * Template entries may contain macros, which are resolved per document;
 * entries whose macros cannot be resolved are skipped.
 */
// host_tags_template = [ "icinga", "$host.vars.os$" ]
// service_tags_template = [ "icinga", "$service.vars.id$" ]
// host_labels_template = { "env" = "production", "os" = "$host.vars.os$" }
// service_labels_template = { "env" = "production", "id" = "$host.vars.id$" }

/* The filter option can be used to filter which events are sent to
 * Elasticsearch. The filter is a regular Icinga 2 filter expression.
 * The filter is applied to both host and service events.
 * If the filter evaluates to true, the event is sent to Elasticsearch.
 * If the filter is not set, all events are sent to Elasticsearch.
 * Only the host and service variables are available in the filter
 * expression; service is null for host check results.
 */
// filter = {{ host.name == "myhost" || service.name == "myservice" }}
}
11 changes: 11 additions & 0 deletions lib/perfdata/CMakeLists.txt
@@ -6,6 +6,7 @@ mkclass_target(influxdbcommonwriter.ti influxdbcommonwriter-ti.cpp influxdbcommonwriter-ti.hpp)
mkclass_target(influxdbwriter.ti influxdbwriter-ti.cpp influxdbwriter-ti.hpp)
mkclass_target(influxdb2writer.ti influxdb2writer-ti.cpp influxdb2writer-ti.hpp)
mkclass_target(elasticsearchwriter.ti elasticsearchwriter-ti.cpp elasticsearchwriter-ti.hpp)
mkclass_target(elasticsearchdatastreamwriter.ti elasticsearchdatastreamwriter-ti.cpp elasticsearchdatastreamwriter-ti.hpp)
mkclass_target(opentsdbwriter.ti opentsdbwriter-ti.cpp opentsdbwriter-ti.hpp)
mkclass_target(perfdatawriter.ti perfdatawriter-ti.cpp perfdatawriter-ti.hpp)

@@ -18,6 +19,7 @@ set(perfdata_SOURCES
influxdb2writer.cpp influxdb2writer.hpp influxdb2writer-ti.hpp
opentsdbwriter.cpp opentsdbwriter.hpp opentsdbwriter-ti.hpp
perfdatawriter.cpp perfdatawriter.hpp perfdatawriter-ti.hpp
elasticsearchdatastreamwriter.cpp elasticsearchdatastreamwriter.hpp elasticsearchdatastreamwriter-ti.hpp
)

if(ICINGA2_UNITY_BUILD)
@@ -58,6 +60,15 @@ install_if_not_exists(
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/usr/elasticsearch/index-template.json
${ICINGA2_PKGDATADIR}/elasticsearch
)
install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/elasticsearchdatastream.conf
${ICINGA2_CONFIGDIR}/features-available
)

install_if_not_exists(
${PROJECT_SOURCE_DIR}/etc/icinga2/features-available/opentsdb.conf
${ICINGA2_CONFIGDIR}/features-available