
High memory consumption of Fluent-Bit pod #10898

@duj4


Bug Report

Describe the bug
Fluent-Bit (4.0.1-debug) is deployed as a DaemonSet on our OpenShift cluster (6 nodes in total); the resource allocation is as below:

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

We found that the memory consumption of each pod stayed at a high level; some pods almost hit the limit, which even triggered pod restarts:
[screenshots: per-pod memory usage close to the 256Mi limit]

Below is the memory usage over a 24-hour period (metric: container_memory_working_set_bytes):

[screenshot: 24-hour memory usage graph]

The daily throughput of the tail input is around 7.2 GB.

There is another cluster with one more tenant than the one above (throughput is 10.5 GB/day) but with a 512 MB memory limit. Here is the past 7 days' memory usage; you can see that the "purple" pod hit the limit and got restarted:

[screenshot: 7-day memory usage graph]

To Reproduce
This might be hard to reproduce, but please refer to the configuration below and check whether any section is misconfigured.

Expected behavior
Memory consumption should not be that high.

Your Environment

  • Version used: 4.0.1-debug
  • Configuration: refer to the screenshots in Additional context below
  • Environment name and version (e.g. Kubernetes? What version?): OpenShift 1.27.16
  • Filters and plugins: refer to the screenshots in Additional context below

Additional context
As we enabled multi-tenancy on the Loki side, the Fluent-Bit configuration is also split into one file per tenant and combined via includes:

[screenshot: per-tenant configuration files]

In fluent-bit.yaml we include all the tenant configuration files together:
[screenshot: includes section of fluent-bit.yaml]
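The structure is roughly as follows (a sketch only; the actual service settings and tenant file names are in the screenshot above, the names here are placeholders):

# fluent-bit.yaml (sketch; file names are placeholders)
service:
  flush: 1
  log_level: info

# each tenant's pipeline lives in its own file and is pulled in here
includes:
  - tenant-a.yaml
  - tenant-b.yaml
  - tenant-c.yaml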

For each tenant, the configuration is as below:

[screenshots: per-tenant input/filter and output configuration]
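Since the screenshots may be hard to read, each tenant file follows roughly this shape (a sketch only, not our exact config; paths, tags, label values and the tenant ID are placeholders):

# tenant-a.yaml (sketch; values are placeholders)
pipeline:
  inputs:
    - name: tail
      tag: tenant_a.*
      path: /var/log/containers/*_tenant-a_*.log
      multiline.parser: cri
      skip_long_lines: on
      # mem_buf_limit caps this input's in-memory buffer; shown for
      # illustration, not necessarily set in our actual config
      mem_buf_limit: 5MB
  filters:
    - name: kubernetes
      match: tenant_a.*
      merge_log: on
  outputs:
    - name: loki
      match: tenant_a.*
      host: loki-gateway.example.svc
      port: 3100
      # tenant_id sets the X-Scope-OrgID header for Loki multi-tenancy
      tenant_id: tenant-a
      labels: job=fluent-bit,tenant=tenant-a

The layout is the same for every tenant; only the tenant-specific values differ.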
