Re: Why could different data in a table be processed with different performance?

2018-10-10 Thread Vladimir Ryabtsev
FYI, posting an intermediate update on the issue.

I disabled index scans to preserve the existing physical order and copied part
of the "slow" range into another table (3M rows in a 2.2 GB table + 17 GB
TOAST). I was able to reproduce the slow reads from this copy. Then I ran
CLUSTER on the copy using the PK and everything improved significantly. Overall
time became 6 times faster, with a disk read speed (as reported by iotop) of
30-60 MB/s.
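
For the record, the reproduction was roughly along these lines (a sketch only;
the table, column and index names and the id range are illustrative, not the
real schema):

    -- keep the existing physical order while copying
    SET enable_indexscan = off;
    SET enable_bitmapscan = off;

    CREATE TABLE articles_slow_copy AS
        SELECT * FROM articles
        WHERE article_id BETWEEN 100000000 AND 103000000;

    -- CLUSTER needs an index to order by, so recreate the PK on the copy
    ALTER TABLE articles_slow_copy ADD PRIMARY KEY (article_id);

    -- rewrite the copy in PK order; reads from it became ~6x faster afterwards
    CLUSTER articles_slow_copy USING articles_slow_copy_pkey;
    ANALYZE articles_slow_copy;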

I think we can take poor physical data layout as the main hypothesis for the
issue. I was not able to run seekwatcher, though (it does not work out of the
box on Ubuntu and I failed to rebuild it), to confirm the large number of
seeks.

I still don't have enough disk space to fix the original table; I am waiting
for it from the admin/devops team.

My plan is to partition the original table and CLUSTER every partition on the
primary key once I have the space.
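
In sketch form (names, column types, and range boundaries below are
illustrative, not the real schema; declarative partitioning as in PG 10+):

    CREATE TABLE articles_partitioned (
        article_id  bigint NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (article_id);

    CREATE TABLE articles_p000 PARTITION OF articles_partitioned
        FOR VALUES FROM (0) TO (100000000);
    ALTER TABLE articles_p000 ADD PRIMARY KEY (article_id);
    -- ... more partitions of the same shape ...

    INSERT INTO articles_partitioned SELECT * FROM articles;

    -- CLUSTER is then run per partition, each against its own PK index
    CLUSTER articles_p000 USING articles_p000_pkey;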

Best regards,
Vlad


Is work_mem used on temp tables?

2018-10-10 Thread Mariel Cherkassky
Hi,
Is work_mem used when I do sorts or hash operations on temp tables? Or is the
temp_buffers memory that is allocated at the beginning of the session used for
that?

In one of our apps, the app creates a temp table and runs some operations on
it (joins, sum, count, avg, and so on). I saw that temp files are being
generated on disk (in the temp location configured via postgresql.conf), so I
guessed that the memory buffer allocated for that session was too small. The
question is: should I increase work_mem or temp_buffers?
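
(For reference, a minimal sketch of how one could check which of the two
settings is the limit; the temp table and query below are made-up examples,
not our actual workload, and this assumes the spill comes from a sort step:)

    -- both can be set per session; temp_buffers must be set before the
    -- session touches any temp table
    SET temp_buffers = '256MB';  -- caches the temp table's own pages
    SET work_mem = '64MB';       -- used by each sort/hash node, temp table or not

    CREATE TEMP TABLE t AS
        SELECT g AS id, g % 100 AS grp
        FROM generate_series(1, 1000000) AS g;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT grp, count(*), avg(id)
    FROM t
    GROUP BY grp
    ORDER BY grp;
    -- A line like "Sort Method: external merge  Disk: ..." means the sort
    -- spilled to disk because of work_mem, not temp_buffers.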

Thanks, Mariel.