Re: Query went slow all of sudden. ON V 11.3

2019-10-06 Thread Tomas Vondra

On Fri, Oct 04, 2019 at 07:28:54PM +0530, nikhil raj wrote:

Hi Justin,

It's been executing for 35+ minutes; due to the statement timeout it's
getting canceled.



Well, without a query plan it's really hard to give you any advice. We
need to see at least the EXPLAIN output (without ANALYZE) to get an idea
of how the query will be executed. Even better, disable the statement
timeout in the session and use EXPLAIN ANALYZE. Of course, it's unclear
how long it'll run.
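
For illustration, a minimal sketch of doing that in a psql session; the
table name big_table is only a placeholder for the slow query:

    -- disable the timeout for this session only (server-wide setting is untouched)
    SET statement_timeout = 0;

    -- plan only, does not execute the query
    EXPLAIN SELECT count(*) FROM big_table;

    -- actually runs the query, reporting per-node timings and buffer usage
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;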

Earlier you mentioned the query started running fast again after you
recreated one of the tables. That likely means the table (or the indexes
on it) is getting bloated over time. Try looking at the sizes of those
objects (and maybe use pgstattuple to get more detailed statistics)
before rebuilding it next time.
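
A minimal sketch of such a check, assuming the suspect table is called
orders and has an index orders_pkey (both names are hypothetical):

    -- overall sizes of the heap, its indexes, and everything combined
    SELECT pg_size_pretty(pg_relation_size('orders'))       AS heap_size,
           pg_size_pretty(pg_indexes_size('orders'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('orders')) AS total_size;

    -- pgstattuple reports dead tuples and free space, i.e. actual bloat
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT * FROM pgstattuple('orders');
    SELECT * FROM pgstatindex('orders_pkey');

Comparing those numbers right before and after the next rebuild shows how
much of the space was really bloat.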


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: Slow PostgreSQL 10.6 query

2019-10-06 Thread Tomas Vondra

On Tue, Oct 01, 2019 at 11:42:33PM +1000, Behrang Saeedzadeh wrote:

Thanks. That eliminated the bottleneck!

Any ideas why adding ORDER BY to the subquery also changes the plan in a
way that eliminates the bottleneck?



IIRC the ORDER BY clause makes it impossible to "collapse" the subquery
into the main (upper) one, and that probably happens to restrict the
choices so that the planner ends up picking a good plan. I guess adding
"OFFSET 0" to the subquery would have the same effect.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: Delete huge Table under XFS

2019-10-06 Thread Tomas Vondra

On Thu, Sep 19, 2019 at 07:00:01PM +0200, Joao Junior wrote:

A table of 800 GB means 800 files of 1 GB each. When I use TRUNCATE or
DROP TABLE, XFS, which is a log-based filesystem, will write lots of data
to its log, and this is the problem. The problem is not Postgres, it is
the way XFS works with big files or, to be more precise, the way it
handles lots of files.



I'm a bit skeptical about this explanation. Yes, XFS has journalling,
but only for metadata, and I have a hard time believing that deleting 800
files (or a small multiple of that) would write "lots of data" into the
journal or cause noticeable performance issues. I wonder how you concluded
this is actually the problem.

That being said, TRUNCATE is unlikely to perform better than DROP,
because it also deletes all the files at once. What you might try is
dropping the indexes one by one, and then the table. That should delete
files in smaller chunks.
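
A sketch of that approach with hypothetical object names; each statement
unlinks only a subset of the files instead of all of them at once:

    -- drop each index separately, so its ~1 GB segments are unlinked on their own
    DROP INDEX big_table_customer_idx;
    DROP INDEX big_table_created_at_idx;

    -- finally drop the (now index-less) table itself
    DROP TABLE big_table;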


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: Out of Memory errors are frustrating as heck!

2019-10-06 Thread Tomas Vondra

On Sat, Aug 24, 2019 at 11:40:09AM -0400, Gunther wrote:
Thanks Tom, yes I'd say it's using a lot of memory, but wouldn't call 
it "leak" as it doesn't grow during the 30 min or so that this query 
runs. It explodes to 4GB and then stays flat until done.




Well, the memory context stats you've shared show this, however:

  total: 1345345736 bytes in 1209 blocks; 4529600 free (270 chunks); 1340816136 used

That's only ~1.3GB, and ~1.1GB of that is the expression context. So
when you say 4GB, when does that happen, and can you share stats showing
the state at that point?

Yes, and this time the query is super complicated, with many joins and
tables involved. The query plan has 100 lines. It's not easy to share or
reproduce, and I have my issue under control by adding some swap just
in case. The swap space was never actually used.




Still, without the query plan we can hardly guess what the issue might be.
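
If the sheer size of the plan is the obstacle, a minimal sketch of
capturing it to a file from psql (the query itself is a placeholder to be
replaced with the real one):

    \o plan.txt
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;   -- the problematic query goes here
    \o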


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: Out of Memory errors are frustrating as heck!

2019-10-06 Thread Tomas Vondra

On Fri, Aug 23, 2019 at 09:17:38AM -0400, Gunther wrote:
Hi all, I am connecting to a discussion back from April this year. My 
data has grown and now I am running into new out of memory situations. 
Meanwhile the world turned from 11.2 to 11.5 which I just installed 
only to find the same out of memory error.




As Tom already said, this seems like quite an independent issue. Next
time it'd be better to share it in a new thread rather than mixing it up
with the old discussion.

Have any of the things discussed and proposed, especially this last 
one by Tomas Vondra, been applied to the 11 releases? Should I try 
these older patches from April?




Unfortunately, no. We're still discussing what the right fix would be
(it's rather tricky, and the patches I shared were way too experimental
for that). But I'm pretty sure that whatever we end up doing, it's going
to be way too invasive to backpatch. I.e. the older branches will likely
have this issue until EOL.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services