improvement of Postgres-specific support in Ansible

2019-03-14 Thread Andrey Klychkov

Hello all!

There is an Ansible PostgreSQL working group
https://github.com/ansible/community/wiki/PostgreSQL
that was created a couple of months ago (thanks to Ansible engineers Dag Wieers
(@dagwieers) and John R. Barker (@gundalow) for their support).

Over the last 3 months we've added 4 new modules to the 6 existing ones:
postgresql_idx
postgresql_ping
postgresql_query
postgresql_tablespace
 
I've also completed 4 new modules that are waiting for a second approval or
review:
postgresql_facts (well reviewed, has a shipit, needs a second one)
https://github.com/ansible/ansible/pull/51164
postgresql_set (well reviewed, has a shipit, needs a second one)
https://github.com/ansible/ansible/pull/51875
postgresql_table  https://github.com/ansible/ansible/pull/52077
postgresql_membership  https://github.com/ansible/ansible/pull/53733

It would be great to see them merged before the Ansible 2.8 feature freeze on
21 March.

If you're interested in a significant improvement of Postgres-specific support,
use Ansible, and are familiar with Python, please look at the pull requests
above and approve them by writing "shipit" in a new comment, or share your
opinion on how to make them better.

You are also welcome to join  https://github.com/ansible/community/wiki/PostgreSQL
The community pinboard for communication is here  
https://github.com/ansible/community/issues/435

If you have any questions, please ask me directly by email.

Thank you!

-- 
Regards,
Andrew K.

Re: Notification or action when WAL archives fully restored and streaming replication started

2019-03-14 Thread Michael Cassaniti



On 14/3/19 5:15 pm, Michael Cassaniti wrote:
> On 14/3/19 3:10 pm, Michael Paquier wrote:
>> On Thu, Mar 14, 2019 at 02:59:38PM +1100, Michael Cassaniti wrote:
>>> I've got master/slave replication setup between a few hosts. At any
>>> point a slave could become a master. I've got appropriate locking in
>>> place using an external system so that only one master can exist at a
>>> time. I'm having trouble determining when my slaves are in sync with
>>> the current master so that they can be a candidate for promotion. I
>>> want some form of indicator _without reading a log_ when streaming
>>> replication has started.
>>
>> pg_stat_replication on the primary, no?  Here is its documentation:
>> https://www.postgresql.org/docs/11/monitoring-stats.html#MONITORING-STATS-VIEWS
>> --
>> Michael
>
> This at least points me in the right direction. I need something on the
> receiver side, so the state column in pg_stat_wal_receiver might cover
> it. I can check for state = streaming, but the DB won't accept
> connections in standby mode.
>
> Thanks for the quick reply.

Actually I was wrong. For some reason my testing initially didn't want to
allow read-only connections. Problem solved.
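For reference, a minimal sketch of the receiver-side check discussed above
(the column in pg_stat_wal_receiver that carries this information is named
status); it has to be run on the standby:

-- Returns true once the WAL receiver on this standby reports that it is
-- streaming from the current primary.
SELECT EXISTS (
    SELECT 1
    FROM pg_stat_wal_receiver
    WHERE status = 'streaming'
) AS streaming_started;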





Monitoring warm standby

2019-03-14 Thread Damir Markovic
I have multiple warm standby replicas of PostgreSQL 11, used for analytics.
Some of the databases are not very active, and I have a problem figuring
out whether recovery is in progress or there is actually nothing to recover
(because of master inactivity).

I set target time on replica like this:
recovery_target_time = '2019-03-13 12:00:00 UTC'

I check pg_is_wal_replay_paused() - false. Ok, replica is still catching up.

select max(modification) from pg_ls_waldir(); - '2019-03-13 10:25:15'. Ok,
I didn't receive any WAL files in a while.

select pg_last_xact_replay_timestamp(); - '2019-03-13 10:15:42.289668'.
Last transaction is even older.

The question now is: how can I know whether the last WAL file downloaded has
been fully processed, or whether a long-running transaction is still being
replayed?
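A sketch of the kind of check being asked about, assuming the standby allows
read-only connections (note that on an archive-only warm standby
pg_last_wal_receive_lsn() can be NULL, in which case watching whether the
replay LSN still advances between samples is the practical alternative):

-- Compare what has been received with what has been replayed, and how far
-- behind the last replayed transaction is.
SELECT pg_last_wal_receive_lsn()                            AS received,
       pg_last_wal_replay_lsn()                             AS replayed,
       pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() AS fully_replayed,
       now() - pg_last_xact_replay_timestamp()              AS replay_delay;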

Thank you very much,
Damir


Remove e-mail from the list

2019-03-14 Thread Angélica Barcellos
Good morning. I constantly receive these emails; is there a way to remove my
e-mail address from this list? I don't want to receive anything else. THANK YOU.

Angélica Barcelos Cardoso



Use case for BEFORE STATEMENT level trigger

2019-03-14 Thread Jitendra Loyal
I have been wondering where one can use a BEFORE ... FOR EACH STATEMENT
trigger, especially because one cannot know (or access) which rows are getting
affected. The only thing that comes to my mind is that this trigger can be
used if one wants to do something at the table level; this is quite unusual,
though, as I have not encountered the need in more than two decades of
developing business applications. I am looking forward to someone who can
enlighten me with a use case, because I may be missing something.
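One hypothetical use case, sketched below with invented names (an orders
table, a custom myapp.maintenance setting, and plain PL/pgSQL): rejecting a
whole statement up front, before any rows are touched at all.

-- Refuse any write to the table while a maintenance flag is set, regardless
-- of how many rows the statement would touch.
CREATE OR REPLACE FUNCTION block_during_maintenance() RETURNS trigger AS $$
BEGIN
    IF current_setting('myapp.maintenance', true) = 'on' THEN
        RAISE EXCEPTION 'table % is read-only during maintenance', TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- the return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_maintenance_guard
    BEFORE INSERT OR UPDATE OR DELETE ON orders
    FOR EACH STATEMENT
    EXECUTE PROCEDURE block_during_maintenance();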

Thanks


Re: Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread fuzk
Dear Adrian,

My setting is as follows.


max_parallel_workers_per_gather=32


I am looking forward to hearing from you.


Many thanks
Alan.









At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:
>On 3/12/19 7:54 PM, fuzk wrote:
>> Dear Sir/Madam
>> 
>> I got an error when I execute the following select sentence.
>> Would you please solve the problem for me?
>
>What version of Postgres?
>
>> Thank you .
>> 
>> Alan Fu.
>> 
>> postgres=# \set VERBOSITY verbose
>> postgres=# SELECT 
>> round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as 
>> NUMERIC),4)||'KM' field_value from had_link;
>> 
>> ERROR:  XX000: cannot update SecondarySnapshot during a parallel operation
>> CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys 
>> WHERE srid = 4326 LIMIT 1"
>> parallel worker
>> LOCATION:  GetLatestSnapshot, snapmgr.c:387
>
>I'm guessing ST_length is not parallel safe.
>
>What is your setting for?:
>
>max_parallel_workers_per_gather
>
>> 
>> 
>
>
>-- 
>Adrian Klaver
>adrian.kla...@aklaver.com


Re: Remove e-mail from the list

2019-03-14 Thread Edson Carlos Ericksson Richter



On 14/03/2019 09:05, Angélica Barcellos wrote:
> Good morning. I constantly receive these emails; is there a way to remove
> my e-mail address from this list? I don't want to receive anything else.
> THANK YOU.
>
> Angélica Barcelos Cardoso

Hello, Angelica.

I'm not the list administrator, but I believe that you will have to run 
the unsubscribe process yourself.


Also, since this is an English-language list, you are more likely to get 
help if you ask your questions in English...


I hope this helps.

--

Best regards,

Edson





Re: Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Julien Rouhaud
On Thu, Mar 14, 2019 at 1:20 PM fuzk  wrote:
>
> Dear Adrian,
>
> My setting is as following.
>
> max_parallel_workers_per_gather=32
>
> I am looking forward to hearing from you.

What version of Postgres and what version of PostGIS are you using?

> At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:
> >On 3/12/19 7:54 PM, fuzk wrote:
> >> postgres=# \set VERBOSITY verbose
> >> postgres=# SELECT
> >> round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
> >> NUMERIC),4)||'KM' field_value from had_link;
> >>
> >> ERROR:  XX000: cannot update SecondarySnapshot during a parallel operation
> >> CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys
> >> WHERE srid = 4326 LIMIT 1"
> >> parallel worker
> >> LOCATION:  GetLatestSnapshot, snapmgr.c:387

I'm not familiar at all with geography, but if I read the code
correctly, ST_Length / geography_length will call
spheroid_init_from_srid(), which eventually calls GetProj4StringSPI(),
which runs a query using SPI.  I'm not sure why exactly
GetLatestSnapshot() is called here, or if SPI is really allowed in
parallel workers, but that's apparently the problem here.  I'm adding
Paul as he'll probably have a much better answer than me.
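A sketch of how one might check and, if necessary, restrict the function's
parallel marking while this is being investigated (the PostGIS signature shown
is an assumption; adjust it to whatever \df st_length reports on your install):

-- proparallel: s = safe, r = restricted, u = unsafe
SELECT proname, proparallel
FROM pg_proc
WHERE proname IN ('st_length', 'geography_length');

-- Keep the geography variant out of parallel workers.
ALTER FUNCTION st_length(geography, boolean) PARALLEL RESTRICTED;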



Re: Remove e-mail from the list

2019-03-14 Thread Adrian Klaver

On 3/14/19 5:31 AM, Edson Carlos Ericksson Richter wrote:
> On 14/03/2019 09:05, Angélica Barcellos wrote:
>> Good morning. I constantly receive these emails; is there a way to remove
>> my e-mail address from this list? I don't want to receive anything else.
>> THANK YOU.
>>
>> Angélica Barcelos Cardoso
>
> Hello, Angelica.
>
> I'm not the list administrator, but I believe that you will have to run
> the unsubscribe process yourself.

https://lists.postgresql.org/unsubscribe/

> Also, since this is an English-language list, you are more likely to get
> help if you ask your questions in English...
>
> I hope this helps.
>
> Best regards,
>
> Edson







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Adrian Klaver

On 3/14/19 6:14 AM, Julien Rouhaud wrote:

On Thu, Mar 14, 2019 at 1:20 PM fuzk  wrote:


Dear Adrian,

My setting is as following.

max_parallel_workers_per_gather=32

I am looking forward to hearing from you.


What version of postgres and what version of postgis are you using ?


At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:

On 3/12/19 7:54 PM, fuzk wrote:

postgres=# \set VERBOSITY verbose
postgres=# SELECT
round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
NUMERIC),4)||'KM' field_value from had_link;

ERROR:  XX000: cannot update SecondarySnapshot during a parallel operation
CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys
WHERE srid = 4326 LIMIT 1"
parallel worker
LOCATION:  GetLatestSnapshot, snapmgr.c:387


I'm not familiar at all with geography, but if I read the code
correctly, ST_Length / geography_length will call
spheroid_init_from_srid(), which eventuallly calls GetProj4StringSPI()
that run a query using SPI.  I'm not sure why exactly
GetLatestSnapshot() is called here, or if SPI is really allowed in
parallel workers, but that's the problem here apparently.  I'm adding
Paul as he'll probably have a way better answer than me.



https://trac.osgeo.org/postgis/ticket/4129
"Cannot update SecondarySnapshot during a parallel operation"

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Adrian Klaver

On 3/13/19 10:54 PM, fuzk wrote:

Dear Adrian,

My setting is as following.

max_parallel_workers_per_gather=32


Not sure if it is possible without affecting other operations, but you 
could set the above to 0 to temporarily disable parallel queries and see 
if that eliminates the error.
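A sketch of how that could be tested without affecting other sessions, by
disabling parallelism only in the session that runs the query:

SET max_parallel_workers_per_gather = 0;
SELECT round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
NUMERIC),4)||'KM' field_value from had_link;
RESET max_parallel_workers_per_gather;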


Also see:

https://trac.osgeo.org/postgis/ticket/4129



I am looking forward to hearing from you.

Many thanks
Alan.







At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:

On 3/12/19 7:54 PM, fuzk wrote:

Dear Sir/Madam

I got an error when I execute the following select sentence.
Would you please solve the problem for me?


What version of Postgres?


Thank you .

Alan Fu.

postgres=# \set VERBOSITY verbose
postgres=# SELECT 
round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as 
NUMERIC),4)||'KM' field_value from had_link;


ERROR:  XX000: cannot update SecondarySnapshot during a parallel operation
CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys 
WHERE srid = 4326 LIMIT 1"

parallel worker
LOCATION:  GetLatestSnapshot, snapmgr.c:387


I'm guessing ST_length is not parallel safe.

What is your setting for?:

max_parallel_workers_per_gather







--
Adrian Klaver
adrian.kla...@aklaver.com







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: TPC-DS queries

2019-03-14 Thread Mark Johnson
I found this error in queries generated from templates query36.tpl,
query70.tpl, and query86.tpl.  The problem is that lochierarchy is an alias
defined in the SELECT list, and the alias isn't recognized inside the CASE
expression in ORDER BY.  PostgreSQL does not allow an output-column alias to
be referenced inside an expression there; you have to repeat the underlying
expression.  Modify each of the queries throwing errors, and replace the
lochierarchy alias with the expression it stands for in the SELECT list.
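A self-contained sketch of the rule, using a throwaway table just to
illustrate how aliases behave in ORDER BY:

CREATE TEMP TABLE t (i_category text, i_class text, amount numeric);

SELECT grouping(i_category) + grouping(i_class) AS lochierarchy,
       sum(amount) AS total
FROM t
GROUP BY rollup (i_category, i_class)
ORDER BY
  lochierarchy DESC,                                       -- OK: alias stands alone
  CASE WHEN grouping(i_category) + grouping(i_class) = 0   -- repeat the expression;
       THEN i_category END;                                -- "lochierarchy = 0" would fail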
-Mark



On Mon, Mar 11, 2019 at 4:00 AM Tatsuo Ishii  wrote:

> > Hi,
> >
> > I think that the sql is not valid. Based on the order by
> > documentation, a column label cannot be used in an expression.
> >
> > fromhttps://www.postgresql.org/docs/11/queries-order.html
> >  > Note that an output column name has to stand alone, that is, it
> > cannot be used in an expression.
>
> Thanks. Yes, you are correct. The line should be something like:
>
>,case when grouping(i_category)+grouping(i_class) = 0 then i_category
> end
>
> > Regards
> > s.
> >
> > On 11.03.2019 06:30, Tatsuo Ishii wrote:
> >> I played with TPC-DS and found some of them can't be executed because
> >> of SQL errors and I am not sure why.
> >>
> >> For example with query 36:
> >>
> >> select
> >>  sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin
> >> ,i_category
> >> ,i_class
> >> ,grouping(i_category)+grouping(i_class) as lochierarchy
> >> ,rank() over (
> >>  partition by grouping(i_category)+grouping(i_class),
> >>  case when grouping(i_class) = 0 then i_category end
> >>  order by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as
> >>  rank_within_parent
> >>   from
> >>  store_sales
> >> ,date_dim   d1
> >> ,item
> >> ,store
> >>   where
> >>  d1.d_year = 2000
> >>   and d1.d_date_sk = ss_sold_date_sk
> >>   and i_item_sk  = ss_item_sk
> >>   and s_store_sk  = ss_store_sk
> >>   and s_state in ('TN','TN','TN','TN',
> >>   'TN','TN','TN','TN')
> >>   group by rollup(i_category,i_class)
> >>   order by
> >> lochierarchy desc
> >>,case when lochierarchy = 0 then i_category end -- line 25 is here.
> >>,rank_within_parent
> >>limit 100;
> >> psql:query_0.sql:1935: ERROR:  column "lochierarchy" does not exist
> >> LINE 25:   ,case when lochierarchy = 0 then i_category end
> >>
> >> I have follwed the instruction here.
> >> https://ankane.org/tpc-ds
> >>
> >> PostgreSQL is master branch HEAD. For me, the SQL above looks to be
> >> valid.
> >>
> >> Best regards,
> >> --
> >> Tatsuo Ishii
> >> SRA OSS, Inc. Japan
> >> English: http://www.sraoss.co.jp/index_en.php
> >> Japanese:http://www.sraoss.co.jp
> >>
> >
>
>


Do all superuser processes count toward superuser_reserved_connections?

2019-03-14 Thread Jeremy Finzel
I don't find a clear mention in the docs of which superuser processes are
exempt from counting toward superuser_reserved_connections.  So I would
think it's possible that postgres autovacuum workers count toward that.
Am I wrong about that?  I have the same question about pglogical replication
background workers and the manager, which also run as postgres.

Do I need to increase my limit to account for these workers, or are some or
all of these processes ignored by superuser_reserved_connections?
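A sketch of one way to see what actually occupies slots on a given server,
assuming PostgreSQL 10 or later (the backend_type column distinguishes client
backends from autovacuum and other background workers):

SELECT backend_type, count(*)
FROM pg_stat_activity
GROUP BY backend_type
ORDER BY count(*) DESC;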

Many thanks!
Jeremy


Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Julien Rouhaud
On Thu, Mar 14, 2019 at 3:25 PM Adrian Klaver  wrote:
>
> On 3/14/19 6:14 AM, Julien Rouhaud wrote:
> > On Thu, Mar 14, 2019 at 1:20 PM fuzk  wrote:
> >>
> >> Dear Adrian,
> >>
> >> My setting is as following.
> >>
> >> max_parallel_workers_per_gather=32
> >>
> >> I am looking forward to hearing from you.
> >
> > What version of postgres and what version of postgis are you using ?
> >
> >> At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:
> >>> On 3/12/19 7:54 PM, fuzk wrote:
>  postgres=# \set VERBOSITY verbose
>  postgres=# SELECT
>  round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
>  NUMERIC),4)||'KM' field_value from had_link;
> 
>  ERROR:  XX000: cannot update SecondarySnapshot during a parallel 
>  operation
>  CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys
>  WHERE srid = 4326 LIMIT 1"
>  parallel worker
>  LOCATION:  GetLatestSnapshot, snapmgr.c:387
> >
> > I'm not familiar at all with geography, but if I read the code
> > correctly, ST_Length / geography_length will call
> > spheroid_init_from_srid(), which eventuallly calls GetProj4StringSPI()
> > that run a query using SPI.  I'm not sure why exactly
> > GetLatestSnapshot() is called here, or if SPI is really allowed in
> > parallel workers, but that's the problem here apparently.  I'm adding
> > Paul as he'll probably have a way better answer than me.
> >
>
> https://trac.osgeo.org/postgis/ticket/4129
> "Cannot update SecondarySnapshot during a parallel operation"

Ah, I didn't look there indeed, thanks!  So the PostGIS people are already
aware; that's good news.



Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Paul Ramsey
On Thu, Mar 14, 2019 at 8:43 AM Julien Rouhaud  wrote:
>
> On Thu, Mar 14, 2019 at 3:25 PM Adrian Klaver  
> wrote:
> >
> > On 3/14/19 6:14 AM, Julien Rouhaud wrote:
> > > On Thu, Mar 14, 2019 at 1:20 PM fuzk  wrote:
> > >>
> > >> Dear Adrian,
> > >>
> > >> My setting is as following.
> > >>
> > >> max_parallel_workers_per_gather=32
> > >>
> > >> I am looking forward to hearing from you.
> > >
> > > What version of postgres and what version of postgis are you using ?
> > >
> > >> At 2019-03-13 22:31:11, "Adrian Klaver"  
> > >> wrote:
> > >>> On 3/12/19 7:54 PM, fuzk wrote:
> >  postgres=# \set VERBOSITY verbose
> >  postgres=# SELECT
> >  round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
> >  NUMERIC),4)||'KM' field_value from had_link;
> > 
> >  ERROR:  XX000: cannot update SecondarySnapshot during a parallel 
> >  operation
> >  CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys
> >  WHERE srid = 4326 LIMIT 1"
> >  parallel worker
> >  LOCATION:  GetLatestSnapshot, snapmgr.c:387
> > >
> > > I'm not familiar at all with geography, but if I read the code
> > > correctly, ST_Length / geography_length will call
> > > spheroid_init_from_srid(), which eventuallly calls GetProj4StringSPI()
> > > that run a query using SPI.  I'm not sure why exactly
> > > GetLatestSnapshot() is called here, or if SPI is really allowed in
> > > parallel workers, but that's the problem here apparently.  I'm adding
> > > Paul as he'll probably have a way better answer than me.
> > >
> >
> > https://trac.osgeo.org/postgis/ticket/4129
> > "Cannot update SecondarySnapshot during a parallel operation"
>
> Ah, I didn't look there indeed, thanks!  So postgis people are already
> aware, that's a good news.

Aware but unable to reproduce it, so nothing is happening on that front.
If you can create a set of data, SQL statements and configuration that
reproduces the problem, that would aid immensely.



Re: LDAP authenticated session terminated by signal 11: Segmentation fault, PostgresSQL server terminates other active server processes

2019-03-14 Thread Mike Yeap
Hi Noah, below is the output from one of the servers having this issue:

$ echo "select pg_backend_pid(); load 'dblink'; select pg_sleep(100)" |
psql -X &
[1] 9731

$ select pg_backend_pid(); load 'dblink'; select pg_sleep(100)
 pg_backend_pid

   9732
(1 row)

LOAD

$ gdb --batch --pid 9732 -ex 'info sharedlibrary ldap'

warning: .dynamic section for "/lib64/libldap-2.4.so.2" is not at the
expected address (wrong library or version mismatch?)

warning: .dynamic section for "/lib64/liblber-2.4.so.2" is not at the
expected address (wrong library or version mismatch?)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x7f1e7592dcf3 in __epoll_wait_nocancel () from /lib64/libc.so.6
From                To                  Syms Read   Shared Object Library
0x7f1e7637d0f8  0x7f1e763ae51c  Yes (*)     /lib64/libldap-2.4.so.2
0x7f1d9f2c16d0  0x7f1d9f2f5ae4  Yes (*)     /lib64/libldap_r-2.4.so.2
(*): Shared library is missing debugging information.


Regards,
Mike Yeap

On Thu, Mar 14, 2019 at 1:42 PM Noah Misch  wrote:

> On Thu, Mar 14, 2019 at 05:18:49PM +1300, Thomas Munro wrote:
> > On Thu, Mar 7, 2019 at 4:19 PM Noah Misch  wrote:
> > > Has anyone else reproduced this?
> >
> > I tried, but could not reproduce this problem on "CentOS Linux release
> > 7.6.1810 (Core)" using OpenLDAP "2.4.44-21.el7_6" (same as Mike
> > reported, what yum install is currently serving up).
>
> > When exiting the session, I was expecting the backend to crash,
> > because it had executed libldap.so code during authentication, and
> > then it had linked in libldap_r.so via libpq.so while connecting via
> > postgres_fdw.  But it doesn't crash.  I wonder what is different for
> > Mike; am I missing something, or is there non-determinism here?
>
> The test is deterministic.  I'm guessing Mike's system is finding ldap
> libraries other than the usual system ones.  Mike, would you check as
> follows?
>
> $ echo "select pg_backend_pid(); load 'dblink'; select pg_sleep(100)" |
> psql -X &
> [1] 2530123
>   pg_backend_pid
> 
> 2530124
> (1 row)
>
> LOAD
>
> $ gdb --batch --pid 2530124 -ex 'info sharedlibrary ldap'
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x76303463 in __epoll_wait_nocancel () from /lib64/libc.so.6
> FromTo  Syms Read   Shared Object Library
> 0x765e1ee0  0x76613304  Yes (*) /lib64/libldap-2.4.so.2
> 0x7fffe998f6d0  0x7fffe99c3ae4  Yes (*)
>  /lib64/libldap_r-2.4.so.2
> (*): Shared library is missing debugging information.
>


Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Julien Rouhaud
On Thu, Mar 14, 2019 at 4:59 PM Paul Ramsey  wrote:
>
> On Thu, Mar 14, 2019 at 8:43 AM Julien Rouhaud  wrote:
> >
> > On Thu, Mar 14, 2019 at 3:25 PM Adrian Klaver  
> > wrote:
> > >
> > > On 3/14/19 6:14 AM, Julien Rouhaud wrote:
> > > > On Thu, Mar 14, 2019 at 1:20 PM fuzk  wrote:
> > > >>
> > > >> Dear Adrian,
> > > >>
> > > >> My setting is as following.
> > > >>
> > > >> max_parallel_workers_per_gather=32
> > > >>
> > > >> I am looking forward to hearing from you.
> > > >
> > > > What version of postgres and what version of postgis are you using ?
> > > >
> > > >> At 2019-03-13 22:31:11, "Adrian Klaver"  
> > > >> wrote:
> > > >>> On 3/12/19 7:54 PM, fuzk wrote:
> > >  postgres=# \set VERBOSITY verbose
> > >  postgres=# SELECT
> > >  round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as
> > >  NUMERIC),4)||'KM' field_value from had_link;
> > > 
> > >  ERROR:  XX000: cannot update SecondarySnapshot during a parallel 
> > >  operation
> > >  CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys
> > >  WHERE srid = 4326 LIMIT 1"
> > >  parallel worker
> > >  LOCATION:  GetLatestSnapshot, snapmgr.c:387
> > > >
> > > > I'm not familiar at all with geography, but if I read the code
> > > > correctly, ST_Length / geography_length will call
> > > > spheroid_init_from_srid(), which eventuallly calls GetProj4StringSPI()
> > > > that run a query using SPI.  I'm not sure why exactly
> > > > GetLatestSnapshot() is called here, or if SPI is really allowed in
> > > > parallel workers, but that's the problem here apparently.  I'm adding
> > > > Paul as he'll probably have a way better answer than me.
> > > >
> > >
> > > https://trac.osgeo.org/postgis/ticket/4129
> > > "Cannot update SecondarySnapshot during a parallel operation"
> >
> > Ah, I didn't look there indeed, thanks!  So postgis people are already
> > aware, that's a good news.
>
> Aware but unable to replicate, so nothing is happening on that front.
> If you can create a set of data, SQL statements and configuration that
> replicates, that would aid immensely.

I also tried to reproduce on latest postgis 2.4 / pg11 with anything
even slightly related to what could call GetLatestSnapshot() with
force_parallel_mode enabled and parallel_leader_participation disabled
(also postgis installcheck), and I couldn't hit this problem (while
I'm sure that the underlying query was run).  I'm starting to think that
this may be due to a third-party module being loaded that could call
GetLatestSnapshot(); otherwise I have no explanation.



Postgres 10 and auto vacuum

2019-03-14 Thread Simon Windsor
Hi

We have a busy Postgres 10 DB with a principal table that holds about 15
million active rows and sees about 90M inserts/updates/deletes a day.

Everything performs well except a small number of monitoring SQL statements of
this nature:

select code, max(timestamp) from mainTable group by code;

After running ANALYZE or VACUUM ANALYZE on the table, the query plan uses an
index on columns (code, timestamp) and takes less than 1s to report the latest
value for each code.

However, after about 20 minutes the query plan has changed to a full table scan
of mainTable, and this takes about 30s. This is very puzzling because the nature
of the table and data has not changed, although perhaps 5M row changes may have
happened. The only cause I can think of is autovacuum.

I have been playing with

autovacuum_analyze_threshold
autovacuum_analyze_scale_factor
default_statistics_target

What is the best option to ensure that the table statistics stay up to date
and that the best query plan is generated? One option would be to use cron and
regenerate the table statistics every 15 minutes, or to disable the autovacuum
ANALYZE function, but neither of these options feels right.

How does the autovacuum analyze function work? Does it

- use a subset of the whole table to regenerate the table statistics,
- use a subset of the rows changed to regenerate the table statistics,
- use a subset of the rows changed to update the table statistics,
- or something else?

Any help or suggestions with this would be appreciated.
 
Simon


Simon Windsor
Eml: simon.wind...@cornfield.org.uk 

 
“There is nothing in the world that some man cannot make a little worse and 
sell a little cheaper, and he who considers price only is that man's lawful 
prey.”








Re: Postgres 10 and auto vacuum

2019-03-14 Thread Michael Lewis
>
> *autovacuum_analyze_threshold*
> *autovacuum_analyze_scale_factor*
>
Changing these will impact how often the table is analyzed based on the
rough count of changed rows. You may want to adjust autovacuum settings as
well so that dead space can be reused.


> *default_statistics_target*
>
Increasing this from default 100 will result in longer planning time, but
you may get a better plan (more consistently).


> What is the best option to ensure that the table statistics stay upto data
> and that the best Query Plan is generated.One option would be to use CRON
> and regenerate the table statistics every 15 minutes, or disable auto
> vacuum ANALYZE function, but neither of these options feel write.
>
You can check if autovacuum is working on this table by checking
pg_stat_user_tables or turning on logging of autovacuum and reviewing your
logs.
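
A sketch of both checks, using the hypothetical table name maintable:

-- When the table was last analyzed, and how many rows changed since then.
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'maintable';

-- Per-table override so statistics are refreshed after roughly 1% of rows
-- change, instead of the global default.
ALTER TABLE maintable SET (autovacuum_analyze_scale_factor = 0.01,
                           autovacuum_analyze_threshold = 1000);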

> How does the auto vacuum analyze function work? Does it
>
> - use a subset of the whole table to regenerate the table statistics
>
Yes. It scans 300*default_statistics_target rows and for each column
estimates null fraction, most common values and the frequency of those,
histogram_bounds and other info found in pg_stats.


Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Thomas Munro
On Fri, Mar 15, 2019 at 6:09 AM Julien Rouhaud  wrote:

> > > > https://trac.osgeo.org/postgis/ticket/4129

> I also tried to reproduce on latest postgis 2.4 / pg11 with anything
> even slightly related to what could call GetLatestSnapshot() with
> force_parallel_mode enabled and parallel_leader_participation disabled
> (also postgis installcheck), and I couldn't hit this problem (while
> I'm sure that the underlying query was run).  I start to think that
> this may be due to a third-party module loaded that could call
> GetLatestSnapshot(), otherwise I have no  explanation.

I don't know much about PostGIS but this does seem very strange.
Comment #7 in the Trac bug says that the error occurs only
intermittently.  Hmm, so what could reach GetLatestSnapshot() only
occasionally...?  Generally that is used for things that are doing RI
checks and other special things involving write queries, but these
aren't write queries, or shouldn't be.  It should be perfectly OK for
SPI stuff to happen inside PARALLEL SAFE functions, as long as they
only do read-only queries; I hope that any SRID lookup-type activity
hiding in these functions is just doing read-only work (for example
we've found a few core function that we had to mark as UNSAFE after we
realised that they could run user-supplied queries that could do
anything).

A fast way to find out would be to get one of these people who can
reproduce the problem to recompile PostgreSQL with that error changed
to a PANIC, and examine the resulting smoldering core.  (Someone had a
proposal to make PostgreSQL errors optionally dump the function call
stack with backtrace(3) even in regular production builds, which would
make this kind of investigation go faster, I wonder what happened to
that.)

-- 
Thomas Munro
https://enterprisedb.com



Re: Autovacuum Transaction Wraparound

2019-03-14 Thread Adrian Klaver

On 3/14/19 11:45 AM, Perumal Raj wrote:

Please reply to list also.
Ccing list.


Thanks Adrian for the reply.

Yes, I went through the document.

My only worry is: will I hit a performance issue once I reach the 200M age
(default), even though I have only static tables?


If it is truly static then there will be few or no xids generated, so
the age will not be reached, or will be reached slowly. Otherwise the normal
autovacuuming will keep the xids under control. The caveat is an
operation or operations that generate a lot of xids faster than the
normal autovac settings can handle.
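
A minimal sketch of how to keep an eye on that age relative to the default
200M autovacuum_freeze_max_age threshold:

-- Per-database age of the oldest unfrozen transaction ID.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- The ordinary tables closest to triggering an anti-wraparound vacuum.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;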






On Wed, Mar 13, 2019 at 12:23 PM Adrian Klaver  wrote:


On 3/11/19 1:24 PM, Perumal Raj wrote:
 > Hi Adrian
 >
 > What was the full message?
 >
 >             autovacuum: VACUUM (to prevent wraparound)
 >
 > Though i am running vacuum manually (nowadays) and autovacuum is
running
 > perfectly once its threshold reaches.
 >
 > What will happen if my DB reaches 200M transaction age again ? (
Here my
 > understanding is no dead tuples to cleanup --- I may be missing full
 > concept , Please correct me if i am wrong) .
 >
 > What will be impact to DB ( Performance ) During Vacuum freeze ( My
 > Assumption is autovacuum will run "vacuum freeze" once DB age
reached
 > 200M ) ?

I would read this:


https://www.postgresql.org/docs/9.2/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

I believe it will answer most of your questions.

 >
 > When should i consider to increase pg_settings value with respect to
 > Autovacuum ?
 >
 > Regards,
 >
 >



-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Camel case identifiers and folding

2019-03-14 Thread Steve Haresnape
I'm porting a sql server database to postgresql 9.6. My camelCase
identifiers are having their humps removed. This is disconcerting and sad.

Is there a cure for this?

I don't want to quote my identifiers unless I have to. I don't want to use
underscores. Everywhere else I write code I use camel case. It makes me feel
warm inside.

This is folding in the same sense Procrustes folded his guests to fit his
bed.

The sql standard has some dumb stuff in it. Do not appeal to it. It can be
safely ignored in this regard (by Microsoft, and anyone else wishing to
prevent the propagation of stupidity).

The only mechanism available to put any semantics into the database is via
the naming of identifiers. Why screw with that? Imagine what would happen
if your IDE decided to do that to your java code.


Re: Camel case identifiers and folding

2019-03-14 Thread David G. Johnston
On Thu, Mar 14, 2019 at 4:07 PM Steve Haresnape
 wrote:
>
> I'm porting a sql server database to postgresql 9.6. My camelCase identifiers 
> are having their humps removed. This is disconcerting and sad.
>
> Is there a cure for this?

No

>I don't want to quote my identifiers unless I have to.

PostgreSQL made the choice long ago to normalize unquoted identifiers
to lower case.  Quoting them will preserve whatever you type,
including case.

David J.



Re: LDAP authenticated session terminated by signal 11: Segmentation fault, PostgresSQL server terminates other active server processes

2019-03-14 Thread Noah Misch
On Fri, Mar 15, 2019 at 12:10:59AM +0800, Mike Yeap wrote:
> Hi Noah, below is the output from one of the servers having this issue:
> 
> $ echo "select pg_backend_pid(); load 'dblink'; select pg_sleep(100)" | psql 
> -X &
> [1] 9731
> 
> $ select pg_backend_pid(); load 'dblink'; select pg_sleep(100)
>  pg_backend_pid
> 
>            9732
> (1 row)
> 
> LOAD
> 
> $ gdb --batch --pid 9732 -ex 'info sharedlibrary ldap'
> 
> warning: .dynamic section for "/lib64/libldap-2.4.so.2" is not at the 
> expected address (wrong library or version mismatch?)
> 
> warning: .dynamic section for "/lib64/liblber-2.4.so.2" is not at the 
> expected address (wrong library or version mismatch?)
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x7f1e7592dcf3 in __epoll_wait_nocancel () from /lib64/libc.so.6
> From                To                  Syms Read   Shared Object Library
> 0x7f1e7637d0f8  0x7f1e763ae51c  Yes (*)     /lib64/libldap-2.4.so.2
> 0x7f1d9f2c16d0  0x7f1d9f2f5ae4  Yes (*)     /lib64/libldap_r-2.4.so.2
> (*): Shared library is missing debugging information.

Thanks.  That rules out my guess.  I don't have another guess at this time.



Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Tom Lane
Thomas Munro  writes:
> A fast way to find out would be to get one of these people who can
> reproduce the problem to recompile PostgreSQL with that error changed
> to a PANIC, and examine the resulting smoldering core.  (Someone had a
> proposal to make PostgreSQL errors optionally dump the function call
> stack with backtrace(3) even in regular production builds, which would
> make this kind of investigations go faster, I wonder what happened to
> that.)

Can't speak for other people, but I remember experimenting with
glibc's backtrace(3) and being so underwhelmed by the usefulness
of the information presented that I thought incorporating it would
be mostly a waste of effort.  Maybe there's an argument that it's
better than nothing at all; but I think we'd still be driven to
asking people to get stack traces with better tools.

regards, tom lane



Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-14 Thread Adrian Klaver

On 3/14/19 7:12 PM, fuzk wrote:
When I set max_parallel_workers_per_gather=0, the select statement can 
execute successfully. The problem has been solved. Thank you all very 
much. Alan





Well, the problem has been hidden, not solved. It did establish that 
invoking parallel query has something to do with it. There is also the fact 
that max_parallel_workers_per_gather is set to 0 by default. So the 
setting you had previously (32) would imply that someone wanted to use 
parallel query.  If it is re-enabled then you are back to the error.  To 
help solve the problem for yourself and others, could you supply the relevant 
information to this problem report:


https://trac.osgeo.org/postgis/ticket/4129



--
Adrian Klaver
adrian.kla...@aklaver.com



Facing issue in using special characters

2019-03-14 Thread M Tarkeshwar Rao
Hi all,

We are facing an issue using special characters. We are trying to insert
records into a remote Postgres server and our application is not able to do
this because of errors.
It seems that the issue is caused by the special characters used in one of the
fields of a row.

Regards
Tarkeshwar


Re: Facing issue in using special characters

2019-03-14 Thread David G. Johnston
On Thursday, March 14, 2019, M Tarkeshwar Rao 
wrote:
>
> Facing issue in using special characters. We are trying to insert records
> to a remote Postgres Server and our application not able to perform this
> because of errors.
>
> It seems that issue is because of the special characters that has been
> used in one of the field of a row.
>

Emailing -general ONLY is both sufficient and polite.  Providing more
detail, and ideally an example, is necessary.

David J.


Re: Camel case identifiers and folding

2019-03-14 Thread Thomas Kellerer
Steve Haresnape schrieb am 15.03.2019 um 00:06:
> I'm porting a sql server database to postgresql 9.6. My camelCase
> identifiers are having their humps removed. This is disconcerting and
> sad.
> 
> Is there a cure for this?
> 
> I don't want to quote my identifiers unless I have to.

You don't need to.
Nothing prevents you from writing CustomerOrders if the table was not created
with double quotes.

The only thing is that the name is stored differently (folded to lower case).
But you can still use CamelCase all over the place if you stick to unquoted
identifiers.
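
A small sketch of the folding behavior, using made-up table and column names:

-- Unquoted identifiers are folded to lower case both at creation time and at
-- query time, so unquoted CamelCase keeps working.
CREATE TABLE CustomerOrders (OrderId int, OrderDate date);
SELECT OrderId FROM CustomerOrders;          -- works: both references fold to lower case
SELECT "OrderId" FROM CustomerOrders;        -- fails: column "OrderId" does not exist

-- Quoting at creation time preserves the case, but then every later
-- reference must be quoted exactly the same way.
CREATE TABLE "CustomerInvoices" ("InvoiceId" int);
SELECT "InvoiceId" FROM "CustomerInvoices";  -- works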