Ahh, that's interesting. I think you still run into the issue where you have a case like this:
1. You get 10 pipelined requests from a single connection, each with a POST body to update some record in a Postgres table.
2. All 10 requests are independent and can be resolved at the same time, so you should make use of Postgres pipelining and send them all as you receive them.
3. When finishing the requests, you likely need the information provided in the request object. Let's assume it's a lot of data in the body, to the point where you've hit your per-connection buffer limit. You either allocate here to unblock the read, or you block new reads until all in-flight requests complete, hurting response latency. Allocating is the better choice at that point, but a heuristic decision engine tuned for peak performance is definitely nuanced, if not complicated.
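To make step 3 concrete, here's a minimal sketch of that trade-off (all names are hypothetical, not from any real server): a fixed per-connection buffer that spills to a heap allocation when full, so new pipelined reads are never blocked while earlier requests are still in flight.

```python
class ConnectionBuffer:
    """Hypothetical per-connection read buffer with heap spill-over."""

    def __init__(self, capacity: int):
        self.capacity = capacity   # fixed per-connection budget
        self.fixed = bytearray()   # bytes held within the budget
        self.overflow = []         # heap-allocated spill chunks

    def write(self, chunk: bytes) -> bool:
        """Append incoming request bytes; returns True if we had to allocate."""
        room = self.capacity - len(self.fixed)
        if len(chunk) <= room:
            self.fixed += chunk
            return False
        # Buffer limit reached: allocate rather than blocking new reads,
        # trading memory for lower response latency.
        self.fixed += chunk[:room]
        self.overflow.append(bytes(chunk[room:]))
        return True

    def release(self) -> None:
        """Once all pipelined requests complete, drop the spill."""
        self.fixed.clear()
        self.overflow.clear()
```

In a real server the `return True` path is where the heuristic lives: you might cap total overflow, or start applying backpressure only past a second threshold, rather than allocating unconditionally.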
It's a cool problem space though, so I'm always interested in learning how others attack it.