Bug Report
Describe the bug
Fluent Bit (4.0.1-debug) is deployed as a DaemonSet on our OpenShift cluster (6 nodes in total). The resource allocation is as follows:
```yaml
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
We found that the memory consumption of each pod stayed at a high level; some pods almost hit the limit, which even triggered pod restarts:

Below is the memory usage over a 24-hour period (metric name: `container_memory_working_set_bytes`):

The daily throughput of the `tail` input is around 7.2 GB.
There is another cluster with one more tenant than the one above (throughput of 10.5 GB/day) but with a 512 MB memory limit. Here is the past 7 days' memory usage; you can see that the "purple" pod hit the limit and was restarted:

To Reproduce
This may be hard to reproduce, but please refer to the configuration below and check whether any section is improper.
Expected behavior
Memory consumption should not be that high.
Your Environment
- Version used: 4.0.1-debug
- Configuration: Refer to the screenshot below
- Environment name and version (e.g. Kubernetes? What version?): OpenShift 1.27.16
- Filters and plugins: Refer to the screenshot below
Additional context
As we enabled multi-tenancy on the Loki side, the Fluent Bit configuration is also separated per tenant and the pieces are combined via includes:

And in fluent-bit.yaml we include all the tenants' configuration files together:
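As an illustration of what this include structure looks like (the tenant file names below are placeholders, not our real files, which are in the screenshots):

```yaml
# Illustrative sketch only -- tenant file names are placeholders.
service:
  flush: 1
  log_level: info

includes:
  - tenant-a.yaml
  - tenant-b.yaml
  - tenant-c.yaml
```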
For each tenant, the configuration is as below:


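For reference, a minimal sketch of the shape of one tenant file is shown below; the tag, path, host, tenant ID, and buffer limit are placeholders for illustration only and do not come from our actual setup:

```yaml
# Rough sketch of a single tenant's pipeline -- all values are placeholders.
pipeline:
  inputs:
    - name: tail
      tag: kube.tenant-a.*
      path: /var/log/containers/*_tenant-a_*.log
      multiline.parser: cri
      mem_buf_limit: 10MB        # caps in-memory buffering for this input
      skip_long_lines: on

  filters:
    - name: kubernetes
      match: kube.tenant-a.*
      merge_log: on

  outputs:
    - name: loki
      match: kube.tenant-a.*
      host: loki-gateway.example.svc
      port: 3100
      tenant_id: tenant-a
      labels: job=fluent-bit
```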