
[spark] wrong metric type for memory- and disk-related metrics #21241


Description

@asidoruk

According to the metrics definition (https://github.com/DataDog/integrations-core/blob/master/spark/metadata.csv) and the corresponding documentation (https://docs.datadoghq.com/integrations/spark/?tab=host#metrics), all memory- and disk-related metrics have the type "count". I use the metrics spark.executor.disk_used and spark.executor.memory_used, and I need to see the current memory/disk pressure. But because these metrics are treated as counters, I instead see cumulative allocations: on a widget they appear as constantly growing values (due to the "counter" semantics), even though they represent a current state/level that can go up and down. So I believe the correct metric type for these metrics is "gauge", not "count".

[Screenshot: dashboard widget showing the constantly growing values of spark.executor.memory_used]

I took a look at other integrations, and memory metrics are defined as "gauge" everywhere except the Spark integration.

In my opinion, the metric types for the Spark integration should be carefully reviewed. Or, if the "count" type was chosen intentionally for the Spark metrics, an explanation would be appreciated.
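
For context, here is a minimal sketch of how the two submission types behave in a Datadog Agent check. The check class and the `executor_stats` helper are hypothetical, and this is not the actual Spark check code; only `AgentCheck.gauge()` and `AgentCheck.count()` are the standard `datadog_checks.base` submission methods:

```python
# Hypothetical sketch, not the real Spark integration code.
from datadog_checks.base import AgentCheck


class SparkMemorySketch(AgentCheck):
    def check(self, instance):
        # Pretend this returns the executor's current resource usage,
        # e.g. the memoryUsed / diskUsed fields from Spark's
        # /applications/<app-id>/executors REST endpoint.
        memory_used, disk_used = self.executor_stats(instance)

        # Submitted as a COUNT, each value is treated as an increment for
        # the flush interval, so dashboards show ever-growing totals.
        self.count('spark.executor.memory_used', memory_used)

        # Submitted as a GAUGE, the value is taken as the current level,
        # which can go up and down -- matching what memoryUsed / diskUsed
        # actually represent.
        self.gauge('spark.executor.memory_used', memory_used)
        self.gauge('spark.executor.disk_used', disk_used)

    def executor_stats(self, instance):
        # Placeholder for whatever fetch/parse logic the real check uses.
        return 0, 0
```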
