Bug Report
Versions
- Driver: 1.0.5
- Database: PostgreSQL 13.12
- Java: 17
- OS: macOS, Linux
Current Behavior
When a query is zipped in parallel with another function that fails, and that query returns more than 256 rows, you can end up in a situation where there is no real consumer left (the chain was cancelled), yet data keeps arriving from the database and is saved into ReactorNettyClient.buffer.
When this happens, any further attempt to fetch data from the database fails, because ReactorNettyClient.BackendMessageSubscriber.tryDrainLoop never calls drainLoop: the stuck conversation has no demand.
private void tryDrainLoop() {
    while (hasBufferedItems() && hasDownstreamDemand()) {
        if (!drainLoop()) {
            return;
        }
    }
}
The issue can be reproduced using https://github.com/agorbachenko/r2dbc-connection-leak-demo; a simplified sketch of the scenario is shown below.
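For reference, a minimal sketch of the kind of scenario that triggers the problem. It assumes a configured ConnectionFactory and a table "items" containing more than 256 rows; it is not the exact code from the demo repository.

import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class LeakSketch {

    // Zips a large query with a failing publisher; the failure cancels the query,
    // but the rows already sent by PostgreSQL keep filling ReactorNettyClient.buffer.
    static Mono<Void> reproduce(ConnectionFactory connectionFactory) {
        return Mono.from(connectionFactory.create()).flatMap(connection ->
            Mono.zip(
                    // Query returning more than 256 rows (the reactor.bufferSize.small default)
                    Flux.from(connection.createStatement("SELECT * FROM items").execute())
                        .flatMap(result -> result.map((row, meta) -> row.get(0)))
                        .collectList(),
                    // Sibling publisher that fails and cancels the whole zip
                    Mono.error(new IllegalStateException("boom")))
                .then()
                .onErrorResume(e -> Mono.empty())
                // Any further query on the same connection now hangs: the buffered
                // messages of the cancelled conversation are never drained
                .then(Flux.from(connection.createStatement("SELECT 1").execute())
                    .flatMap(result -> result.map((row, meta) -> row.get(0)))
                    .then()));
    }
}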
If the system property "reactor.bufferSize.small" is increased to 350, the attached example starts working.
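One way to apply that workaround is sketched below. The property is read once when Reactor's Queues class is initialized, so it has to be set early; passing -Dreactor.bufferSize.small=350 on the command line has the same effect.

public class Main {

    public static void main(String[] args) {
        // Workaround: raise the in-memory buffer above the query's row count.
        // Must run before any Reactor classes are initialized.
        System.setProperty("reactor.bufferSize.small", "350");

        // ... start the application / run the demo afterwards
    }
}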