Re: Details after Load Peak (was: OT: Performance of VM)

2018-02-06 Thread Alan Hodgson
On Tue, 2018-02-06 at 15:31 +0100, Thomas Güttler wrote:
> 
> But one thing is still unclear. Imagine I see a peak in the chart. The peak
> was some hours ago. AFAIK sar has only the aggregated numbers.
> 
> But I need to know the details if I want to answer the question "Why?". The
> peak has gone, and ps/top/iotop don't help me anymore.
> 

The typical solution is to store stats on everything you can think of with
Munin, Cacti, Ganglia, or similar systems.

I know with Ganglia, at least, that in addition to the many metrics it
already tracks on a system and the many plugins already available for it,
you can write your own plugins or simple agents, so you can keep stats on
anything you can code around.
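
For example, on a PostgreSQL host, a custom plugin could simply sample the
cumulative counters in pg_stat_database on each polling interval (a minimal
sketch of the query such a plugin might run; the graphing tool handles the
scheduling and the deltas):

    -- counters a custom monitoring plugin might record each interval
    SELECT datname,
           xact_commit,    -- committed transactions
           xact_rollback,  -- rolled-back transactions
           blks_read,      -- blocks read from disk
           blks_hit        -- blocks found in shared buffers
    FROM pg_stat_database
    WHERE datname = current_database();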

Munin's probably the easiest to try out, though.

Re: Slow Bitmap Index Scan

2018-12-03 Thread Alan Hodgson
On Mon, 2018-12-03 at 18:41 +, Scott Rankin wrote:
> Upon further analysis, this is - unsurprisingly - taking place when we have 
> multiple prefixed search terms in a ts_query going against a tsvector index.
> 
> We have roughly 30 million rows in the table, and the search column is 
> basically a concatenation of a location's name (think "Walmart #123456") and 
> its street address.
> 
> We use these searches mostly for autocompletion of a location search.  So the 
> search for that record above might be "Walmart 123", which we rewrite as 
> to_tsquery('walmart:* & 123:*').  We prefix both terms to correct for 
> misspellings or lazy typing.
> 
> Is it unrealistic to think that we could have sub-1000ms searches against 
> that size of a table?
> 

We've found trigram indexes to be much faster and more useful for these
types of searches than full-text.

https://www.postgresql.org/docs/10/pgtrgm.html

Might be worth a try, if you haven't tested them before.
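
For example (a minimal sketch, assuming a hypothetical table
locations(search_text text); adjust the names to your schema):

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE INDEX locations_search_trgm_idx
        ON locations USING gin (search_text gin_trgm_ops);

    -- a trigram GIN index can serve ILIKE directly, so the raw
    -- autocomplete input needs no tsquery rewriting
    SELECT *
    FROM locations
    WHERE search_text ILIKE '%walmart 123%'
    LIMIT 10;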

Re: Problem with indices from 10 to 13

2021-09-28 Thread Alan Hodgson
On Tue, Sep 28, 2021 at 12:40, Daniel Diniz wrote:
> > Hello, I migrated from Postgres 10 to 13 and noticed a big increase in the
> > runtime of a query I use. I ran EXPLAIN on both 10 and 13 and the
> > difference is absurd; the indexes and data are the same on both. I've
> > re-created and re-indexed, but I don't know what changed from 10 to 13
> > that made the performance so bad, or whether 13 needs some extra
> > parameter in some conf file.
> > 
> > Postgres 13
> > 
> > "QUERY PLAN"
> > "Limit  (cost=1.13..26855.48 rows=30 width=137) (actual
> > time=10886.585..429803.463 rows=4 loops=1)"
> > "  ->  Nested Loop  (cost=1.13..19531164.71 rows=21819 width=137) (actual
> > time=10886.584..429803.457 rows=4 loops=1)"
> > "        Join Filter: (h.ult_eve_id = ev.evento_id)"
> > "        Rows Removed by Join Filter: 252"
> > "        ->  Nested Loop  (cost=1.13..19457514.32 rows=21819 width=62)
> > (actual time=10886.326..429803.027 rows=4 loops=1)"
> > "              ->  Nested Loop  (cost=0.85..19450780.70 rows=21819
> > width=55) (actual time=10886.259..429802.908 rows=4 loops=1)"
> > "                    ->  Index Scan Backward using hawbs_pkey on hawbs h
> >  (cost=0.57..19444209.67 rows=21819 width=46) (actual
> > time=10886.119..429802.676 rows=4 loops=1)"
> > "                          Filter: ((tipo_hawb_id = ANY
> > ('{1,10,3}'::integer[])) AND ((nome_des)::text ~~*
> > convert_from('\x255354455048414e592053544f4557204c45414e44524f25'::bytea,
> > 'LATIN1'::name)))"
> > "                          Rows Removed by Filter: 239188096"
> 
> The Index Scan Backward looks suspicious to me.
> 239,188,096 rows removed by the filter is a lot of work.
> 
> Did you run ANALYZE?

PostgreSQL has an unfortunate love of scanning the pkey index backwards when
you use LIMIT.

Try pushing your actual query into a subquery (with an OFFSET 0 to keep it
from being optimized away) and then apply the LIMIT outside it.
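
Something like this (a sketch only; the full original query isn't shown, so
the search term, the column list, and the pkey column name here are
assumptions):

    SELECT *
    FROM (
        SELECT h.*
        FROM hawbs h
        WHERE h.tipo_hawb_id IN (1, 10, 3)
          AND h.nome_des ILIKE '%some name%'
        OFFSET 0  -- optimization fence: stops the planner from flattening
                  -- the subquery and choosing the backward pkey scan
    ) sub
    ORDER BY hawb_id DESC  -- hypothetical pkey column
    LIMIT 30;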