Ok, I reproduced the OOM, this time with only 200 partitions as opposed to the
2K I mentioned before. The keys to reproducing it, I believe, are:

1. Write millions of rows to *all* partitions
2. Run ANALYZE so that the planner knows about the sizes

Here are the correct steps now. Let me know if you manage to reproduce it.
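Concretely, the kind of setup I mean looks roughly like the sketch below; only
a sketch, the partition layout, partition names and row counts are illustrative
rather than my exact schema:

-- Range-partitioned parent; column names as in the table discussed in this
-- thread, everything else (bounds, counts, partition names) made up.
CREATE TABLE test_runs_raw (
    workitem_n         int NOT NULL,
    test_executable_n  int,
    test_resulttype_n  int
) PARTITION BY RANGE (workitem_n);

-- 200 partitions covering 1000 workitem_n values each.
DO $$
BEGIN
    FOR i IN 0..199 LOOP
        EXECUTE format(
            'CREATE TABLE test_runs_raw_p%s PARTITION OF test_runs_raw
             FOR VALUES FROM (%s) TO (%s)',
            i, i * 1000, (i + 1) * 1000);
    END LOOP;
END $$;

-- Millions of rows spread over *all* partitions, then ANALYZE so the
-- planner knows the sizes.
INSERT INTO test_runs_raw
SELECT (random() * 199999)::int,
       (random() * 100)::int,
       (random() * 5)::int
FROM generate_series(1, 10000000);

ANALYZE test_runs_raw;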
On Thu, 18 Jul 2024, David Rowley wrote:

On Fri, 12 Jul 2024 at 02:08, Dimitrios Apostolou wrote:
> * The memory is unnecessarily allocated early on, before any partitions
>   are actually aggregated. I know this because I/O is slow on this device
>   and the table sizes are huge; it's simply not possible to have read all
>   that data so early.
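For what it's worth, one way to see where that memory actually sits (on
PostgreSQL 14 or later) is to dump the backend's memory contexts to the server
log from a second session while the query is running; the pid below is only a
placeholder:

-- Run from a second session; 12345 stands for the pid of the backend
-- running the query (look it up in pg_stat_activity). The dump goes to
-- the server log, not to the client.
SELECT pg_log_backend_memory_contexts(12345);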
On Fri, 12 Jul 2024 at 02:08, Dimitrios Apostolou wrote:
> I can't help but see this as a bug. I see many issues:
>
> * postgres is not reading from partitions in parallel, but one after the
>   other. It shouldn't need all this memory simultaneously.
I don't know for Hash Aggregate, but for nod
On Thursday, July 11, 2024, Dimitrios Apostolou wrote:
> I wonder how the postgres development community is
> tracking all these issues; I've even started forgetting the ones I have
> found, and I'm sure I have previously reported (on this list) a couple of
> should-be-easy issues that would be ideal
On Thu, 11 Jul 2024, Tom Lane wrote:
> Dimitrios Apostolou writes:
> > The TABLE test_runs_raw has 1000 partitions on RANGE(workitem_n).
>
> So don't do that. Adding partitions is not cost-free.

I understand that; they also add an administrative cost that I'd rather
avoid. But I ended up adding all of them anyway.
Dimitrios Apostolou writes:
> The TABLE test_runs_raw has 1000 partitions on RANGE(workitem_n).
So don't do that. Adding partitions is not cost-free.
regards, tom lane
Thank you for the feedback.
So I've managed to reduce the query to a rather simple one:
SELECT
    workitem_n, test_executable_n,
    bool_or(test_resulttype_n IN (2,3))
FROM
    test_runs_raw
GROUP BY
    workitem_n, test_executable_n
LIMIT 10;
The TABLE test_runs_raw has 1000 partitions on RANGE(workitem_n).
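For anyone following along, the plan the planner picks for this query can be
inspected without actually running it (and without risking the OOM):

-- Plain EXPLAIN only plans, it does not execute, so it is safe to run.
-- The interesting part is where the aggregation sits: one HashAggregate
-- above the Append, or one per partition (partitionwise aggregation).
EXPLAIN
SELECT workitem_n, test_executable_n,
       bool_or(test_resulttype_n IN (2,3))
FROM test_runs_raw
GROUP BY workitem_n, test_executable_n
LIMIT 10;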
On Thu, 11 Jul 2024 at 13:19, Dimitrios Apostolou wrote:
> I have a table with 1000 partitions on PostgreSQL 16.
> I notice that a fairly complicated query of the form:
>
> SELECT ... GROUP BY ... LIMIT ...
>
> causes the postgres backend process to grow insanely fast, and the
> kernel OOM killer to kill it rather soon.
Hello list,
I have a table with 1000 partitions on PostgreSQL 16.
I notice that a fairly complicated query of the form:
SELECT ... GROUP BY ... LIMIT ...
causes the postgres backend process to grow insanely fast, and the
kernel OOM killer to kill it rather soon.
It seems it tries to allocate memory for all the partitions up front.
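In case anyone wants to watch it happen: the pid of the backend to monitor
from the OS (with ps or top) can be found from a second session, for example:

-- Run from a second session while the problematic query is executing;
-- the returned pid is the process whose memory growth you can watch.
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid();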