13 changes: 9 additions & 4 deletions processor/ratelimitprocessor/gubernator.go
@@ -229,20 +229,25 @@ func (r *gubernatorRateLimiter) RateLimit(ctx context.Context, hits int) error {
)...)),
)
rate, burst = cfg.Rate, cfg.Burst
- } else if rate > cfg.Rate { // Dynamic escalation occurred
+ } else if rate > cfg.Rate {
r.telemetryBuilder.RatelimitDynamicEscalations.Add(ctx, 1,
metric.WithAttributeSet(attribute.NewSet(append(attrs,
attribute.String("reason", "success"),
attribute.String("reason", "escalation"),
)...)),
)
- } else { // Dynamic escalation was skipped (dynamic <= static)
+ } else if rate < cfg.Rate {
+ r.telemetryBuilder.RatelimitDynamicEscalations.Add(ctx, 1,
Contributor Author

[For reviewers] (CC: @marclop) I am not sure about this metric with the latest changes. I was thinking of basing it on the value of the multiplier, but I am not sure that would be helpful either. I have updated the current metric to record whether the dynamic rate is higher or lower than the configured static rate, but it won't catch the case where it is under degradation and starting to decrease from the static rate. We could probably handle that by comparing against the previous rate rather than the static rate. Let me know what you think.
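
To illustrate the "compare against the previous rate" idea, here is a minimal sketch (not part of this PR; `rateTracker`, its `lastRate` map, and the `initial` reason are invented names). It classifies each update relative to the last rate observed for the same unique key, so a decrease is reported as `deescalation` relative to the previous value, regardless of where it sits relative to the static rate:

```go
package main

import (
	"fmt"
	"sync"
)

// rateTracker remembers the last effective rate seen per unique key.
// In the processor this state would live on the gubernatorRateLimiter.
type rateTracker struct {
	lastRate sync.Map // uniqueKey -> int64
}

// reasonFor returns the telemetry "reason" attribute for a new rate,
// comparing against the previously observed rate instead of cfg.Rate.
func (t *rateTracker) reasonFor(uniqueKey string, rate int64) string {
	prev, loaded := t.lastRate.Swap(uniqueKey, rate)
	if !loaded {
		return "initial" // first observation for this key
	}
	switch prevRate := prev.(int64); {
	case rate > prevRate:
		return "escalation"
	case rate < prevRate:
		return "deescalation"
	default:
		return "skipped"
	}
}

func main() {
	t := &rateTracker{}
	fmt.Println(t.reasonFor("k", 100)) // initial
	fmt.Println(t.reasonFor("k", 150)) // escalation
	fmt.Println(t.reasonFor("k", 120)) // deescalation, even if still above the static rate
}
```

The trade-off would be the extra per-key state and deciding when to expire it.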

Contributor

Maybe an alternative to this metric would be to have a gauge where the final rate is published with the right dimensions (metadata keys, class, etc).
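
For reference, a rough sketch of what such a gauge could look like using the OpenTelemetry Go metrics API directly (the metric name, unit, and attributes are invented here; in the processor this would go through the generated telemetryBuilder instead):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordEffectiveRate publishes the final rate applied by the limiter as a
// gauge, with the same dimensions used by the existing counters. In a real
// implementation the instrument would be created once at construction time,
// not on every call.
func recordEffectiveRate(ctx context.Context, rate int64, class, sourceKind string) error {
	meter := otel.Meter("ratelimitprocessor")
	gauge, err := meter.Int64Gauge(
		"ratelimit.effective_rate", // invented metric name
		metric.WithDescription("Final rate applied by the rate limiter"),
		metric.WithUnit("{hits}/s"),
	)
	if err != nil {
		return err
	}
	gauge.Record(ctx, rate, metric.WithAttributes(
		attribute.String("class", class),
		attribute.String("source_kind", sourceKind),
	))
	return nil
}

func main() {
	_ = recordEffectiveRate(context.Background(), 1000, "alpha", "class")
}
```

A gauge like this exposes the current effective rate directly, so escalation vs. de-escalation could be derived in the backend rather than encoded as counter reasons.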

Contributor Author

Hmm, that sounds like a good idea. A gauge would give direct insight into the rate. Do you think we should keep this one too, or replace it entirely?

Contributor

I'm more inclined to remove this one and add a gauge than to keep it, but if you think it adds good insight we can keep it as well. It seems redundant if we add the gauge, though, right?

Contributor

Ready to review again once you make the changes discussed here.

+ metric.WithAttributeSet(attribute.NewSet(append(attrs,
+ attribute.String("reason", "deescalation"),
+ )...)),
+ )
+ } else {
r.telemetryBuilder.RatelimitDynamicEscalations.Add(ctx, 1,
metric.WithAttributeSet(attribute.NewSet(append(attrs,
attribute.String("reason", "skipped"),
)...)),
)
}

}
// Execute the actual rate limit check / recording.
return r.executeRateLimit(ctx, cfg, uniqueKey, hits, rate, burst, now)
2 changes: 1 addition & 1 deletion processor/ratelimitprocessor/gubernator_test.go
@@ -914,7 +914,7 @@ func TestGubernatorRateLimiter_TelemetryCounters(t *testing.T) {
Attributes: attribute.NewSet(
attribute.String("class", "alpha"),
attribute.String("source_kind", "class"),
attribute.String("reason", "success"),
attribute.String("reason", "escalation"),
),
},
}, metricdatatest.IgnoreTimestamp())