
Reduce DirectFileStore memory overhead, particularly while aggregating data when getting scraped #185

@berniechiu

Description


As we mention in our README, using DirectFileStore has a measurable impact on the production app's memory usage.

This doesn't seem to be a memory leak: usage doesn't grow unbounded over time, but it is a problem for some of our users.

We think this memory usage is particularly high when the app gets scraped, at which point the library has to read the files from all the processes and load all the data into RAM to aggregate it. There may be more efficient ways to do this. As an example, we've found this improvement in the past.
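To illustrate the scrape-time cost, here is a simplified sketch of cross-process aggregation. This is not client_ruby's actual binary file format or its `DirectFileStore` internals; the file layout, helper name, and SUM-only aggregation are assumptions for illustration. The point is that streaming each file entry by entry keeps only the running totals in memory, instead of loading every file's full contents before aggregating:

```ruby
require "json"
require "tmpdir"

# Hypothetical helper (not part of client_ruby): each worker process has
# written its own file of metric samples, one JSON-encoded [labels, value]
# pair per line. At scrape time we combine them into a single result.
#
# File.foreach reads one line at a time, so peak memory is bounded by the
# size of the totals hash rather than the combined size of all files.
def aggregate_metric_files(paths)
  totals = Hash.new(0.0)
  paths.each do |path|
    File.foreach(path) do |line|
      labels, value = JSON.parse(line)
      totals[labels] += value # SUM aggregation, as used for counters
    end
  end
  totals
end

Dir.mktmpdir do |dir|
  # Two hypothetical worker processes, each with its own samples file.
  File.write(File.join(dir, "worker_0.txt"),
             %([["path","/"],3.0]\n[["path","/admin"],1.0]\n))
  File.write(File.join(dir, "worker_1.txt"),
             %([["path","/"],2.0]\n))

  TOTALS = aggregate_metric_files(Dir[File.join(dir, "*.txt")].sort)
end

puts TOTALS.inspect
```

Reducing the overhead likely means pushing this kind of streaming further down, so that aggregation never materializes all samples from all files at once.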

We'd like to reduce the memory overhead of using DirectFileStore as much as possible, so this is a sort of call for PRs.

Original issue text follows, so that the conversation thread below makes sense:


Hi Prometheus team,

We've bumped into issues like this

[Screenshot 2020-04-14 17 55 37]

Is there any way to dump the file store properly?
