Description
According to the metrics definition (https://github.com/DataDog/integrations-core/blob/master/spark/metadata.csv) and the corresponding documentation (https://docs.datadoghq.com/integrations/spark/?tab=host#metrics), all memory- and disk-related Spark metrics have type "count". I use spark.executor.disk_used and spark.executor.memory_used, and my goal is to see the current memory/disk pressure. But because these metrics are treated as counters, I see cumulative allocations instead: on a widget the values grow constantly (due to the "counter" semantics), even though these metrics actually represent a current state/level that can go up and down. So I believe the correct metric type for them is "gauge", not "count".
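For illustration, here is a minimal sketch (not the actual integration code) of how these values could be submitted so that dashboards show the current level. It assumes the standard `AgentCheck` submission API from `datadog_checks.base` and the field names of Spark's `/applications/[app-id]/executors` REST endpoint (`memoryUsed`, `diskUsed`, `completedTasks`); the class and method names below are hypothetical:

```python
from datadog_checks.base import AgentCheck


class SparkExecutorSketch(AgentCheck):
    """Illustrative only: shows gauge vs. count semantics for executor metrics."""

    def _report_executor_metrics(self, executor, tags):
        # memoryUsed / diskUsed describe the executor's *current* state and can
        # go up and down, so submitting them as gauges preserves that meaning
        # when graphed.
        self.gauge('spark.executor.memory_used', executor.get('memoryUsed', 0), tags=tags)
        self.gauge('spark.executor.disk_used', executor.get('diskUsed', 0), tags=tags)

        # By contrast, a monotonically increasing value such as the number of
        # completed tasks is a natural fit for count/monotonic_count. Typing
        # memory_used the same way is what makes it look like it only ever grows.
        self.monotonic_count('spark.executor.completed_tasks', executor.get('completedTasks', 0), tags=tags)
```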
I took a look at other integrations, and "memory" metrics are defined as "gauge" everywhere except the Spark integration.
In my view, the metric types for the Spark integration should be carefully reviewed. Or, if the "count" type was chosen intentionally for the Spark metrics, an explanation would be appreciated.