On Tue, Sep 08, 2020 at 10:30:50AM +0000, Dirk Krautschick wrote:
> Update: Better title and format corrections
>
> Hi %,
>
> In order to offset the per-request cost of calling the stored procedure and,
> where possible, save round-trip time, several statements can be concatenated
> with semicolons and sent as a single request. We ran some tests of this
> against a Postgres cluster.
>
> The results in terms of optimizations are as follows:
>
>
> Batch size | Clients | Queries executed | Avg time/query | Comment
> -----------|---------|------------------|----------------|--------
>          1 |       1 | 15.86k           | 2.24ms         |
>         10 |       1 | 31.80k           | 332us          |
>         25 |       1 | 31.75k           | 312us          |
>         50 |       1 | 32.00k           | 280us          |
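The shape of that table is what you'd expect if each request carries a fixed
round-trip overhead that gets amortized across the batch. A minimal sketch of
that model, with the overhead and per-statement cost values being assumptions
fitted to the two endpoints of the posted table (2.24ms at batch size 1,
280us at batch size 50), not measured quantities:

```python
# Illustrative cost model, not a benchmark: if each request has a fixed
# overhead O (network round trip, parsing, etc.) and each statement costs P
# on its own, then the average time per statement in a batch of n is
#     O/n + P
# which falls quickly at first and then flattens toward P.

def avg_per_stmt_us(batch_size, overhead_us=2000.0, per_stmt_us=240.0):
    """Average microseconds per statement for a given batch size.

    Default O and P are chosen so the model reproduces the batch-size-1
    and batch-size-50 rows of the table above; they are assumptions.
    """
    return overhead_us / batch_size + per_stmt_us

for n in (1, 10, 25, 50, 100):
    print(n, round(avg_per_stmt_us(n)))
```

Under this model, doubling the batch size from 50 to 100 only moves the
average from 280us toward the 240us floor, which is consistent with the
observation below that past a certain batch size some other bottleneck
dominates.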
I guess you're looking at the minimum of 280us per query, which works out to
roughly 214k queries per minute:
; 1/(280e-6) * 60
~214285.71428571428571428571
> the question remains why this is so.
You can't expect it to go a billion times faster just by putting a billion
queries in one request, and at a batch size of 50 it looks like you've hit the next
performance bottleneck. Whether that's CPU / IO / network / locks / RAM /
planner / context switches / logging / ??? remains to be seen.
> Does anyone have experience with something similar, or are there any
> hints on how to optimize the Postgres cluster for such bundled statements?
I think at this point you want to optimize what the statements are doing, not
how they are batched. Could you send a query plan for the stored procedure?
Also, you may want to consider whether there's a way to avoid making hundreds
of thousands of requests per second in the first place, rather than trying to
optimize for it. Can you write another stored procedure which handles N
requests, rather than calling this SP N times? There's no guarantee that
avoids the same or another bottleneck, but you won't know until you see what
it is.
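To make the "one call for N inputs" idea concrete, here is a driver-agnostic
sketch. The procedure names `handle_request` / `handle_requests` and the
integer-id inputs are purely hypothetical, stand-ins for whatever the real SP
takes; the point is only the difference in statement count:

```python
# Sketch: instead of issuing one CALL per request (N statements, however
# they are batched on the wire), hand the whole list of inputs to a single
# set-based procedure and let the server loop. Statements are represented
# as (sql, params) pairs, as most DB-API drivers expect.

def n_calls(ids):
    """One CALL per request: N separate statements."""
    return [("CALL handle_request(%s)", (i,)) for i in ids]

def one_call(ids):
    """A single CALL passing every input at once (e.g. as an int[])."""
    return ("CALL handle_requests(%s)", (list(ids),))

print(len(n_calls(range(1000))))  # 1000 statements to execute
print(one_call([1, 2, 3]))        # one statement, one array parameter
```

Whether this actually wins depends on what `handle_requests` can do
server-side; if it just loops over the same per-row work, you mostly save the
statement overhead, which is still the 2ms-per-request class of saving seen in
the table above.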
--
Justin