EINTR while resizing dsm segment.

2020-04-02 Thread Kyotaro Horiguchi
I provided the subject, and added -hackers.

> Hello,
> I am running postgres 11.5 and we were having issues with shared segments.
> So I increased the max_connection as suggested by you guys and reduced my
> work_mem to 600M.
> 
> Right now instead, it is the second time I see this error :
> 
> ERROR:  could not resize shared memory segment "/PostgreSQL.2137675995" to
> 33624064 bytes: Interrupted system call

The function posix_fallocate is protected against EINTR:

| do
| {
|   rc = posix_fallocate(fd, 0, size);
| } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));

But the ftruncate and write calls are not. Don't we need to protect them
from EINTR, as in the attached?

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From 590b783f93995bfd1ec05dbcb2805a577372604d Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Thu, 2 Apr 2020 17:09:35 +0900
Subject: [PATCH] Protect dsm_impl from EINTR

dsm_impl functions should not error out on EINTR.
---
 src/backend/storage/ipc/dsm_impl.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/src/backend/storage/ipc/dsm_impl.c b/src/backend/storage/ipc/dsm_impl.c
index 1972aecbed..f4e7350a5e 100644
--- a/src/backend/storage/ipc/dsm_impl.c
+++ b/src/backend/storage/ipc/dsm_impl.c
@@ -360,7 +360,11 @@ dsm_impl_posix_resize(int fd, off_t size)
 	int			rc;
 
 	/* Truncate (or extend) the file to the requested size. */
-	rc = ftruncate(fd, size);
+	do
+	{
+		rc = ftruncate(fd, size);
+	} while (rc < 0 && errno == EINTR &&
+			 !(ProcDiePending || QueryCancelPending));
 
 	/*
 	 * On Linux, a shm_open fd is backed by a tmpfs file.  After resizing with
@@ -874,11 +878,19 @@ dsm_impl_mmap(dsm_op op, dsm_handle handle, Size request_size,
 		while (success && remaining > 0)
 		{
 			Size		goal = remaining;
+			ssize_t		rc;
 
 			if (goal > ZBUFFER_SIZE)
 				goal = ZBUFFER_SIZE;
 			pgstat_report_wait_start(WAIT_EVENT_DSM_FILL_ZERO_WRITE);
-			if (write(fd, zbuffer, goal) == goal)
+
+			do
+			{
+				rc = write(fd, zbuffer, goal);
+			} while (rc < 0 && errno == EINTR &&
+					 !(ProcDiePending || QueryCancelPending));
+
+			if (rc == (ssize_t) goal)
 				remaining -= goal;
 			else
 				success = false;
-- 
2.18.2



Re: postgres: archiver process failed on 000000010000020A00000073

2020-04-02 Thread Silvio Fabi - NBS srl
thanks Alvaro,
following your instructions I solved the problem.

Il 01/04/2020 19:36, Alvaro Herrera ha scritto:
> On 2020-Apr-01, Silvio Fabi - NBS srl wrote:
>
>> WAL Archive process on my DB Postgresql 10 is not working.
>> the WAL Archive file 000000010000020A00000073 was accidentally deleted
>> and there is no backup.
>> Postgresql tries to run the following command:
>> rsync -a pg_wal/000000010000020A00000073
>> barman@database:/var/lib/barman/database/incoming/000000010000020A00000073
> Ouch.  This is not a barman problem; accidentally deleting a WAL file is
> synonymous with corrupting your database.  Any existing backup that
> needs that file is now invalid.  I suggest you run CHECKPOINT and then
> take a new backup.  I think removing the 000000010000020A00000073.ready
> file from pg_wal should prevent future attempts at archiving it, so that
> subsequent WAL files are archived.
>
> By all means, *do not* lose any more WAL files.  It is a very dangerous
> thing to do.
>

-- 
Silvio Fabi
NBS srl
Via Val Tiberina 23/A
San Benedetto del Tronto (AP)
Telefono: 0735/7626242
Cellulare: 345/0142435






Re: using a common key value on both sides of a union ?

2020-04-02 Thread Laurenz Albe
On Wed, 2020-04-01 at 19:39 -0400, David Gauthier wrote:
> psql (9.6.7, server 11.3) on linux
> 
> I want to do something like this 
> (intentionally bad sql but will illustrate the need)
> 
> select s.name,s.grade from students s where s.class='math'
> union
> select 'whole class', class_grade from all_classes where class=s.class
> 
> Of course it's that "where class=s.class" that's a foul.
> 
> In English, I want a list of each math student and their grade and then 
> append one more record for the entire class, a record that comes from a 
> different table but narrowed down to that one class.
> 
> I don't care if union isn't the right approach.  Anything that works is 
> welcome !

SELECT s.name, s.grade
FROM students s
WHERE s.class = 'math'
UNION
SELECT 'whole class', a.class_grade
FROM all_classes a
WHERE EXISTS (SELECT 1 FROM students s1
  WHERE a.class = s1.class
AND s1.class = 'math');

I deliberately ignored that the condition could be simplified substantially.
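
For reference, one simplified form (a sketch; unlike the EXISTS version,
it emits the class row even when the class has no students):

SELECT s.name, s.grade
FROM students s
WHERE s.class = 'math'
UNION
SELECT 'whole class', class_grade
FROM all_classes
WHERE class = 'math';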

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!

After upgrading to Postgres 12, the statement

update temprid set
   ContactFirstName = unnest(xpath(
      '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))::text,
   yhik = unnest(xpath(
      '/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',x))::text
from t

throws error

set-returning functions are not allowed in UPDATE

How to fix this?

Maybe there is a simple change which makes this statement work?
The result should be any value of the xpath expression in case xpath
returns multiple values.

In Postgres 9.1 it worked.

Andrus. 






Re: Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Laurenz Albe
On Thu, 2020-04-02 at 17:48 +0300, Andrus wrote:
> After upgrading to Postgres 12   statement
> 
> update temprid set
> ContactFirstName =unnest(xpath(
>  
> '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))::text,
> yhik =unnest(xpath(
>  
> '/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',x))::text
>  from t
> 
> throws error
> 
> set-returning functions are not allowed in UPDATE
> 
> How to fix this ?

Simply replace

   SET col = unnest(array_value)

with

   SET col = array_value[1]
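
Applied to the statement above, that would look something like this
(an untested sketch; note that PostgreSQL requires parentheses around a
function call before its result can be subscripted):

update temprid set
   ContactFirstName = (xpath(
      '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))[1]::text,
   yhik = (xpath(
      '/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',x))[1]::text
from t;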

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





too many clients already

2020-04-02 Thread Abraham, Danny
Hi,

Will appreciate a hint here.

Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
sorry, too many clients already"
and transient difficulty to log in.

Happens on all PG versions (Tested 9.5,10.4,11.5)

Big installation: max_connections is 1200,  shared_buffers is 2GB 

But .. select count(*) from pg_stat_activity   is only 66.

Thanks

Danny




Re: too many clients already

2020-04-02 Thread Rob Sargent



> On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:
> 
> Hi,
> 
> Will appreciate a hint here.
> 
> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
> sorry, too many clients already"
> and transient difficulty to log in.
> 
> Happens on all PG versions (Tested 9.5,10.4,11.5)
> 
> Big installation: max_connections is 1200,  shared_buffers is 2GB 
> 
> But .. select count(*) from pg_stat_activity   is only 66.
> 
> Thanks
> 
> Danny
> 
> 
Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
pg-bouncer)?



Re: too many clients already

2020-04-02 Thread Adrian Klaver

On 4/2/20 8:06 AM, Abraham, Danny wrote:

Hi,

Will appreciate a hint here.

Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  sorry, 
too many clients already"
and transient difficulty to log in.

Happens on all PG versions (Tested 9.5,10.4,11.5)

Big installation: max_connections is 1200,  shared_buffers is 2GB

But .. select count(*) from pg_stat_activity   is only 66.


On the chance pg_stat_activity is lying to you what does:

ps ax | grep post

show for backends?

For example:

ps ax | grep post
 1217 ?   Ss   0:00 /usr/lib/postfix/bin//master -w
 1233 ?   S    0:00 /usr/local/pgsql12/bin/postmaster -D /usr/local/pgsql12/data
 1253 ?   Ss   0:00 postgres: logger
 1277 ?   Ss   0:00 postgres: checkpointer
 1278 ?   Ss   0:00 postgres: background writer
 1279 ?   Ss   0:00 postgres: walwriter
 1280 ?   Ss   0:00 postgres: autovacuum launcher
 1281 ?   Ss   0:00 postgres: stats collector
 1282 ?   Ss   0:00 postgres: logical replication launcher
 4693 ?   Ss   0:00 postgres: aklaver task_manager [local] idle
 4907 ?   Ss   0:00 postgres: aklaver production [local] idle



Thanks

Danny





--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Re: too many clients already

2020-04-02 Thread Abraham, Danny
No pg-bouncer or connection pooling.
ps -elf | grep postgres | grep idle | wc -l  ==>61

And BTW: running, say, 500 one-command psql sessions in parallel will have
the same effect...

-Original Message-
From: Rob Sargent  
Sent: Thursday, April 02, 2020 6:10 PM
To: Abraham, Danny 
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already



> On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:
> 
> Hi,
> 
> Will appreciate a hint here.
> 
> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
> sorry, too many clients already"
> and transient difficulty to log in.
> 
> Happens on all PG versions (Tested 9.5,10.4,11.5)
> 
> Big installation: max_connections is 1200,  shared_buffers is 2GB 
> 
> But .. select count(*) from pg_stat_activity   is only 66.
> 
> Thanks
> 
> Danny
> 
> 
Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
pg-bouncer)?




Re: too many clients already

2020-04-02 Thread Adrian Klaver

On 4/2/20 8:22 AM, Abraham, Danny wrote:

No pg-bouncer or connection pooling.
ps -elf | grep postgres | grep idle | wc -l  ==>61

and BTW: Running, say 500 one command psql in parallel will have the same 
affect..


Hmm. In psql on the cluster in question what does below return?:

show max_connections;



-Original Message-
From: Rob Sargent 
Sent: Thursday, April 02, 2020 6:10 PM
To: Abraham, Danny 
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already




On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:

Hi,

Will appreciate a hint here.

Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  sorry, 
too many clients already"
and transient difficulty to log in.

Happens on all PG versions (Tested 9.5,10.4,11.5)

Big installation: max_connections is 1200,  shared_buffers is 2GB

But .. select count(*) from pg_stat_activity   is only 66.

Thanks

Danny



Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
pg-bouncer)?





--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Re: too many clients already

2020-04-02 Thread Abraham, Danny
Big installation: max_connections is 1200,  shared_buffers is 2GB

-Original Message-
From: Adrian Klaver  
Sent: Thursday, April 02, 2020 6:30 PM
To: Abraham, Danny ; pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already

On 4/2/20 8:22 AM, Abraham, Danny wrote:
> No pg-bouncer or connection pooling.
> ps -elf | grep postgres | grep idle | wc -l  ==>61
> 
> and BTW: Running, say 500 one command psql in parallel will have the same 
> affect..

Hmm. In psql on the cluster in question what does below return?:

show max_connections;

> 
> -Original Message-
> From: Rob Sargent 
> Sent: Thursday, April 02, 2020 6:10 PM
> To: Abraham, Danny 
> Cc: pgsql-gene...@postgresql.org
> Subject: [EXTERNAL] Re: too many clients already
> 
> 
> 
>> On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:
>>
>> Hi,
>>
>> Will appreciate a hint here.
>>
>> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
>> sorry, too many clients already"
>> and transient difficulty to log in.
>>
>> Happens on all PG versions (Tested 9.5,10.4,11.5)
>>
>> Big installation: max_connections is 1200,  shared_buffers is 2GB
>>
>> But .. select count(*) from pg_stat_activity   is only 66.
>>
>> Thanks
>>
>> Danny
>>
>>
> Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
> pg-bouncer)?
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


Re: too many clients already

2020-04-02 Thread Adrian Klaver

On 4/2/20 8:35 AM, Abraham, Danny wrote:

Big installation: max_connections is 1200,  shared_buffers is 2GB


Have you confirmed that the above is actually in effect by doing?:

show max_connections;



-Original Message-
From: Adrian Klaver 
Sent: Thursday, April 02, 2020 6:30 PM
To: Abraham, Danny ; pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already

On 4/2/20 8:22 AM, Abraham, Danny wrote:

No pg-bouncer or connection pooling.
ps -elf | grep postgres | grep idle | wc -l  ==>61

and BTW: Running, say 500 one command psql in parallel will have the same 
affect..


Hmm. In psql on the cluster in question what does below return?:

show max_connections;



-Original Message-
From: Rob Sargent 
Sent: Thursday, April 02, 2020 6:10 PM
To: Abraham, Danny 
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already




On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:

Hi,

Will appreciate a hint here.

Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  sorry, 
too many clients already"
and transient difficulty to log in.

Happens on all PG versions (Tested 9.5,10.4,11.5)

Big installation: max_connections is 1200,  shared_buffers is 2GB

But .. select count(*) from pg_stat_activity   is only 66.

Thanks

Danny



Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
pg-bouncer)?








--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Re: too many clients already

2020-04-02 Thread Abraham, Danny
va-tlv-ctm-qa22.isr.bmc.com% sql
psql: FATAL:  sorry, too many clients already
va-tlv-ctm-qa22.isr.bmc.com% sql
psql (11.5)
Type "help" for help.

ctrlmdb=> show max_connections;
 max_connections
-----------------
 1200
(1 row)

ctrlmdb=> show shared_buffers;
 shared_buffers
----------------
 2000MB
(1 row)



-Original Message-
From: Adrian Klaver  
Sent: Thursday, April 02, 2020 6:37 PM
To: Abraham, Danny ; pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already

On 4/2/20 8:35 AM, Abraham, Danny wrote:
> Big installation: max_connections is 1200,  shared_buffers is 2GB

Have you confirmed that the above is actually in effect by doing?:

show max_connections;

> 
> -Original Message-
> From: Adrian Klaver 
> Sent: Thursday, April 02, 2020 6:30 PM
> To: Abraham, Danny ; pgsql-gene...@postgresql.org
> Subject: [EXTERNAL] Re: too many clients already
> 
> On 4/2/20 8:22 AM, Abraham, Danny wrote:
>> No pg-bouncer or connection pooling.
>> ps -elf | grep postgres | grep idle | wc -l  ==>61
>>
>> and BTW: Running, say 500 one command psql in parallel will have the same 
>> affect..
> 
> Hmm. In psql on the cluster in question what does below return?:
> 
> show max_connections;
> 
>>
>> -Original Message-
>> From: Rob Sargent 
>> Sent: Thursday, April 02, 2020 6:10 PM
>> To: Abraham, Danny 
>> Cc: pgsql-gene...@postgresql.org
>> Subject: [EXTERNAL] Re: too many clients already
>>
>>
>>
>>> On Apr 2, 2020, at 9:06 AM, Abraham, Danny  wrote:
>>>
>>> Hi,
>>>
>>> Will appreciate a hint here.
>>>
>>> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL: 
>>>  sorry, too many clients already"
>>> and transient difficulty to log in.
>>>
>>> Happens on all PG versions (Tested 9.5,10.4,11.5)
>>>
>>> Big installation: max_connections is 1200,  shared_buffers is 2GB
>>>
>>> But .. select count(*) from pg_stat_activity   is only 66.
>>>
>>> Thanks
>>>
>>> Danny
>>>
>>>
>> Lots of idle, kept-alive clients?  Do you have a connection pooler (e.g. 
>> pg-bouncer)?
>>
>>
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


Re: too many clients already

2020-04-02 Thread Tom Lane
"Abraham, Danny"  writes:
> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
> sorry, too many clients already"
> and transient difficulty to log in.
> Happens on all PG versions (Tested 9.5,10.4,11.5)
> Big installation: max_connections is 1200,  shared_buffers is 2GB 
> But .. select count(*) from pg_stat_activity   is only 66.

I'd be suspicious that there are a lot of clients stuck in connection
startup (likely the authentication phase); those connections aren't going
to show in pg_stat_activity until they finish connecting.  The "ps"
suggestion Adrian gave you would not show them either, because they're
not going to say "idle".

Enabling log_connections and watching the postmaster log would help
prove or disprove that theory.
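
For example, log_connections can be turned on without a restart
(a minimal sketch, assuming superuser access):

ALTER SYSTEM SET log_connections = on;
SELECT pg_reload_conf();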

regards, tom lane




RE: Re: too many clients already

2020-04-02 Thread Abraham, Danny
Well, I guess the question is - how do I optimize PG for a stream of very
short-lived checks...
See below:

2020-04-02 11:05:37.010 CDTLOG:  connection received: host=10.64.72.157 
port=45799
2020-04-02 11:05:37.014 CDTLOG:  connection received: host=10.64.72.157 
port=45814
2020-04-02 11:05:37.014 CDTLOG:  connection received: host=10.64.72.157 
port=45813
2020-04-02 11:05:37.018 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.015 CDTLOG:  connection received: host=10.64.72.157 
port=45815
2020-04-02 11:05:37.015 CDTLOG:  connection received: host=10.64.72.157 
port=45817
2020-04-02 11:05:37.015 CDTLOG:  connection received: host=10.64.72.157 
port=45809
2020-04-02 11:05:37.015 CDTLOG:  connection received: host=10.64.72.157 
port=45818
2020-04-02 11:05:37.016 CDTLOG:  connection received: host=10.64.72.157 
port=45819
2020-04-02 11:05:37.021 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.021 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.021 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.021 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.021 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.022 CDTFATAL:  sorry, too many clients already
2020-04-02 11:05:37.022 CDTFATAL:  sorry, too many clients already
-Original Message-
From: Tom Lane  
Sent: Thursday, April 02, 2020 6:52 PM
To: Abraham, Danny 
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already

"Abraham, Danny"  writes:
> Running on a big and stressed AIX platform and receiving lots of "CDTFATAL:  
> sorry, too many clients already"
> and transient difficulty to log in.
> Happens on all PG versions (Tested 9.5,10.4,11.5) Big installation: 
> max_connections is 1200,  shared_buffers is 2GB
> But .. select count(*) from pg_stat_activity   is only 66.

I'd be suspicious that there are a lot of clients stuck in connection startup 
(likely the authentication phase); those connections aren't going to show in 
pg_stat_activity until they finish connecting.  The "ps"
suggestion Adrian gave you would not show them either, because they're not 
going to say "idle".

Enabling log_connections and watching the postmaster log would help prove or 
disprove that theory.

regards, tom lane




Re: too many clients already

2020-04-02 Thread Tom Lane
"Abraham, Danny"  writes:
> Well, I guess the question is - how do I optimize PG for a stream of very 
> short-lived checks...

You should be using a connection pooler for a load like that.
PG backends are fairly heavyweight things --- you don't want
to fire one up for just a single query, at least not when
there are many such queries per second.

I think pgbouncer and pgpool are the most widely used options,
but this is a bit outside my expertise.

regards, tom lane




RE: Re: too many clients already

2020-04-02 Thread Abraham, Danny
Agree.

I suspect that this is a misconfigured pgpool - the developer thinks that the
pool is reusing connections, while it is, in fact, reopening them.

-Original Message-
From: Tom Lane  
Sent: Thursday, April 02, 2020 7:40 PM
To: Abraham, Danny 
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: too many clients already

"Abraham, Danny"  writes:
> Well, I guess the question is - how do I optimize PG for a stream of very 
> short-lived checks...

You should be using a connection pooler for a load like that.
PG backends are fairly heavyweight things --- you don't want to fire one up for 
just a single query, at least not when there are many such queries per second.

I think pgbouncer and pgpool are the most widely used options, but this is a 
bit outside my expertise.

regards, tom lane




Could someone please help us share the procedure to troubleshoot the locks on proc issues.

2020-04-02 Thread postgann2020 s
Hi Team,

Good Evening,

Could someone please help us share the procedure to troubleshoot the locks
on proc issues.

Environment:

 1 pgpool server (Master Pool Node) using streaming replication with load
balancing
 4 DB nodes (1 Master and 3 Slaves).

 Versions:
 1. postgres: 9.5.15
 2. pgpool   : 3.9
 3. repmgr:  4.1

We are continuously facing locking issues with the procedures below; because
of this, the rest of the calls to these procs go into a waiting state, which
causes the DB to hang. Below are the procs running as DB_User2 from the
application.

1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This proc
itself calls Schema1.cable_remove_validation($1,$2).
2. select * from Schema1.cable_remove_validation($1,$2)  ==> This is also
called from the application.

If we run EXPLAIN ANALYZE, each call takes only milliseconds, but when run
simultaneously from the application they become locked and sit in a waiting
state.

We ran the query below to list the blocking queries and attached the output
in the Blocking_Queries_with_PID.csv file:

SELECT
pl.pid as blocked_pid
,psa.usename as blocked_user
,pl2.pid as blocking_pid
,psa2.usename as blocking_user
,psa.query as blocked_statement
FROM pg_catalog.pg_locks pl
JOIN pg_catalog.pg_stat_activity psa
ON pl.pid = psa.pid
JOIN pg_catalog.pg_locks pl2
JOIN pg_catalog.pg_stat_activity psa2
ON pl2.pid = psa2.pid
ON pl.transactionid = pl2.transactionid
AND pl.pid != pl2.pid
WHERE NOT pl.granted;
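
For reference, on PostgreSQL 9.6 and later the same information is exposed
more directly through pg_blocking_pids() (not available on the 9.5 server
above); a minimal sketch:

SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;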

Output: attached output in Blocking_Queries_with_PID.csv file


The waiting connections keep accumulating and cause the DB to hang.
I have attached a pg_stat_activity Excel file for that user, along with the
proc queries that are stuck in the waiting state.

Findings:

There are a total of 18 connections for DB_User2, all running only the above
2 procs. Of those, only one connection (PID 18732) has been running a proc
(select * from Schema1.duct_remove_validation($1,$2,$3,$4)) for a long time,
and the remaining 17 connections have been in a waiting state for a long
time.

There are many exclusive locks on the table for PID 18732 and other
processes as well. I have attached a pg_locks reference Excel file
(Lock_Reference_For_PROC) with PID 18732 highlighted.

Could someone please suggest the procedure to troubleshoot this issue.
Please find the attachment for reference.

Thanks,
Postgann.


Blocking_PROCS_With_PIDS.csv
Description: MS-Excel spreadsheet


pg_stat_activity_output_for_clarification.xlsx
Description: MS-Excel 2007 spreadsheet


Lock_Reference_For_PROC.xlsx
Description: MS-Excel 2007 spreadsheet


Help to find-the-maximum-length-of-field-in-a-particular-column-in-all the tables

2020-04-02 Thread postgann2020 s
Hi Team,

Good Evening,

We are trying to sync our table field sizes with the field sizes of the
corresponding tables in another DB.

We have multiple tables in multiple schemas in the DB. We are trying to find
the maximum size of the data in each column, for all tables and all schemas
in the DB.

How do we find the maximum length of the data in a particular column of a
table, and then for all tables in a schema?

Example column names in table: a,b,c,d,e,f,g
Example schema names in DB: A,B,C,D

Expected output:

column_name  Max_size_of_column
a            10
b            20

or

column_name  Max_size_of_column  column_table  table_schema
a            10                  Table1        Schema1
b            20                  Table1        Schema1

I have tried the query below, but was not able to get the desired output.

SELECT (select column_name from INFORMATION_SCHEMA.COLUMNS where
table_name='building'), select max(length(select column_name from
INFORMATION_SCHEMA.COLUMNS where table_name='building')) from from
INFORMATION_SCHEMA.COLUMNS where table_name='building'
group by column_name;

Please help us to get the desired output.

Below is the reference i have used for above one.
https://stackoverflow.com/questions/43123311/how-to-find-the-maximum-length-of-data-in-a-particular-field-in-postgresql

Regards,
Postgann.


Cstore_fdw issue.

2020-04-02 Thread Moses Mafusire
Hi,
I am new to PostgreSQL. I successfully installed PostgreSQL v12.2 on my
CentOS 7 machine and I am trying to install cstore_fdw.

    1. I have managed to run this command:
       sudo yum install protobuf-c-devel

    2. Per the instructions I am following:
       "Once you have protobuf-c installed on your machine, you are ready
       to build cstore_fdw. For this, you need to include the pg_config
       directory path in your make command."

    3. I have searched for the pg_config directory and found 2 locations:
       /usr/bin/pg_config
       /usr/pgsql-12/bin/pg_config

    4. However, when I try to run the makefile in cstore_fdw I get this
       error:
       [postgres@dart-centos cstore_fdw]$ sudo PATH=/usr/bin/:$PATH make
       [sudo] password for postgres:
       Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile this extension.  Stop.

       I even tried the other path, same error:
       [postgres@dart-centos cstore_fdw]$ PATH=/usr/pgsql-12/bin/:$PATH make
       Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile this extension.  Stop.

    5. PostgreSQL version:
       postgres=# show server_version;
        server_version
       ----------------
        12.2
       (1 row)

       postgres=# \! psql -V
       psql (PostgreSQL) 12.2

May you please assist me with this issue?

Thank You,
Moses

Re: Help to find-the-maximum-length-of-field-in-a-particular-column-in-all the tables

2020-04-02 Thread David G. Johnston
Please just choose a single list to email.

On Thursday, April 2, 2020, postgann2020 s  wrote:

> Hi Team,
>
> Good Evening,
>
> We are trying to syncing the table field size with other db tables filed
> size.
>

I wouldn’t bother fiddling with arbitrary sizes, just remove them.
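
For instance, a length limit can be dropped in place (hypothetical table
and column names):

ALTER TABLE mytable ALTER COLUMN mycol TYPE text;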


> We have multiple tables in multiple schemas in DB. we are tyring to
> findout maximum size of each column in table for all tables and for all
> schemas in DB.
>

You need to decide whether you want to query table data or catalog (schema)
data.


>
> I have tried below query, but not able to get desired output.
>
> SELECT (select column_name from INFORMATION_SCHEMA.COLUMNS where
> table_name='building'), select max(length(select column_name from
> INFORMATION_SCHEMA.COLUMNS where table_name='building')) from from
> INFORMATION_SCHEMA.COLUMNS where table_name='building'
> group by column_name;
>
>
If you want to dynamically determine tables and columns while still
checking the actual table data, you will need to create a dynamic sql
statement to query the actual table, with the dynamic schema parts being
populated by a query against information_schema or the catalog.  You'd
need something besides pure sql (e.g., pl/pgsql) to do this.  You may
find it easier to just brute force things.
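
A minimal pl/pgsql sketch of that approach, assuming only text-like
columns matter (adjust the type filter and output format as needed):

DO $$
DECLARE
    col record;
    maxlen bigint;
BEGIN
    FOR col IN
        SELECT table_schema, table_name, column_name
        FROM information_schema.columns
        WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
          AND data_type IN ('character varying', 'character', 'text')
    LOOP
        -- Query the actual table for the longest value in this column.
        EXECUTE format('SELECT max(length(%I)) FROM %I.%I',
                       col.column_name, col.table_schema, col.table_name)
           INTO maxlen;
        RAISE NOTICE '% | % | % | %',
            col.column_name, maxlen, col.table_name, col.table_schema;
    END LOOP;
END;
$$;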

David J.


Re: Cstore_fdw issue.

2020-04-02 Thread Adrian Klaver

On 4/2/20 1:40 PM, Moses Mafusire wrote:

Hi,
I am new to PostgreSQL. I successfully installed PostgreSQL v12.2 on my
CentOS 7 machine and I am trying to install cstore_fdw.

    1. I have managed to run this command:
       sudo yum install protobuf-c-devel

    2. Per the instructions I am following:
       "Once you have protobuf-c installed on your machine, you are ready
       to build cstore_fdw. For this, you need to include the pg_config
       directory path in your make command."

    3. I have searched for the pg_config directory and found 2 locations:
       /usr/bin/pg_config
       /usr/pgsql-12/bin/pg_config

    4. However, when I try to run the makefile in cstore_fdw I get this
       error:
       [postgres@dart-centos cstore_fdw]$ sudo PATH=/usr/bin/:$PATH make
       [sudo] password for postgres:
       Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile this extension.  Stop.

       I even tried the other path, same error:
       [postgres@dart-centos cstore_fdw]$ PATH=/usr/pgsql-12/bin/:$PATH make
       Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile this extension.  Stop.

    5. PostgreSQL version:
       postgres=# show server_version;
        server_version
       ----------------
        12.2
       (1 row)

       postgres=# \! psql -V
       psql (PostgreSQL) 12.2

May you please assist me with this issue?


What do you get if you run:

pg_config

from the command line.

An example from my machine:

aklaver@maura:~> pg_config
BINDIR = /usr/local/pgsql12/bin
DOCDIR = /usr/local/pgsql12/share/doc
HTMLDIR = /usr/local/pgsql12/share/doc
INCLUDEDIR = /usr/local/pgsql12/include
PKGINCLUDEDIR = /usr/local/pgsql12/include
INCLUDEDIR-SERVER = /usr/local/pgsql12/include/server
LIBDIR = /usr/local/pgsql12/lib64
PKGLIBDIR = /usr/local/pgsql12/lib64
LOCALEDIR = /usr/local/pgsql12/share/locale
MANDIR = /usr/local/pgsql12/share/man
SHAREDIR = /usr/local/pgsql12/share
SYSCONFDIR = /usr/local/pgsql12/etc
PGXS = /usr/local/pgsql12/lib64/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--with-python' '--with-openssl' '--with-libxml' 
'--with-uuid=ossp' '--prefix=/usr/local/pgsql12'

CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Werror=vla -Wendif-labels 
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing 
-fwrapv -fexcess-precision=standard -Wno-format-truncation -O2

CFLAGS_SL = -fPIC
LDFLAGS = -Wl,--as-needed 
-Wl,-rpath,'/usr/local/pgsql12/lib64',--enable-new-dtags

LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lpthread -lxml2 -lssl -lcrypto -lz 
-lreadline -lrt -lcrypt -ldl -lm

VERSION = PostgreSQL 12.1



Thank You,
Moses



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Could someone please help us share the procedure to troubleshoot the locks on proc issues.

2020-04-02 Thread Adrian Klaver

On 4/2/20 12:37 PM, postgann2020 s wrote:

Hi Team,

Good Evening,

Could someone please help us share the procedure to troubleshoot the 
locks on proc issues.


Environment:

  1 pgpool server (Master Pool Node) using Straming replication with 
load balancing

  4 DB nodes (1Master and 3 Slaves).

  Versions:
  1. postgres: 9.5.15
  2. pgpool   : 3.9
  3. repmgr:  4.1

We are continuously facing locking issues for below procedures , due to 
this the  rest of the call for these procs going into waiting 
state.Which cause the DB got hung. Below are the procs  running with 
DB_User2 from the application.


1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This 
proc it self calling Schema1.cable_remove_validation($1,$2).
2. select * from Schema1.cable_remove_validation($1,$2)  ==> This is 
also calling from the applications


To figure out below we need to see what is happening in above.



if we ran explain analyze, its taking msec only, but if we run 
simultaneouly from application getting locked and waiting state.


We have ran below query for showing blocking queries and attached output 
in Blocking_Queries_with_PID.csv file:


SELECT
pl.pid as blocked_pid
,psa.usename as blocked_user
,pl2.pid as blocking_pid
,psa2.usename as blocking_user
,psa.query as blocked_statement
FROM pg_catalog.pg_locks pl
JOIN pg_catalog.pg_stat_activity psa
ON pl.pid = psa.pid
JOIN pg_catalog.pg_locks pl2
JOIN pg_catalog.pg_stat_activity psa2
ON pl2.pid = psa2.pid
ON pl.transactionid = pl2.transactionid
AND pl.pid != pl2.pid
WHERE NOT pl.granted;

Output: attached output in Blocking_Queries_with_PID.csv file


The waiting connections are keep on accumulating and cause DB hung.
I have attached pg_stat_activity excel file with the user along with the 
proc queries which cause waiting state.


Finds:

There are total 18 connections for DB_User2 which are running only above 
2 procs, Out of that only one connection with 18732 is running proc 
(select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long 
time  and reset of all 17 connections are in waiting state from the long 
time.


There are many exclusive locks on table for 18732 and other process as 
well. I have attached pg_locks reference excel(Lock_Reference_For_PROC) 
with highlighted pid 18732.


Could someone please suggest the procedure to troubleshoot this issue.
Please find the attachment for reference.

Thanks,
Postgann.



--
Adrian Klaver
adrian.kla...@aklaver.com




Backing out of privilege grants rabbit hole

2020-04-02 Thread AC Gomez
Do I understand correctly that if a role was assigned countless object
privileges and you want to delete that role you have to sift through a
myriad of privilege grants in what amounts to a time consuming trial and
error exercise until you've got them all?

Or is there a single command that will just delete the role and do a
blanket grant removal at the same time?


Re: Backing out of privilege grants rabbit hole

2020-04-02 Thread Guyren Howe
https://www.postgresql.org/docs/12/sql-drop-owned.html 


> On Apr 2, 2020, at 20:34 , AC Gomez  wrote:
> 
> Do I understand correctly that if a role was assigned countless object 
> privileges and you want to delete that role you have to sift through a myriad 
> of privilege grants in what amounts to a time consuming trial and error 
> exercise until you've got them all?
> 
> Or is there a single command that with just delete the role and do a blanket 
> grant removal at the same time?



Re: Backing out of privilege grants rabbit hole

2020-04-02 Thread Guyren Howe
More fully:

REASSIGN OWNED BY doomed_role TO successor_role;
DROP OWNED BY doomed_role;
-- repeat the above commands in each database of the cluster
DROP ROLE doomed_role;

> On Apr 2, 2020, at 20:37 , Guyren Howe  wrote:
> 
> https://www.postgresql.org/docs/12/sql-drop-owned.html 
> 
> 
>> On Apr 2, 2020, at 20:34 , AC Gomez > > wrote:
>> 
>> Do I understand correctly that if a role was assigned countless object 
>> privileges and you want to delete that role you have to sift through a 
>> myriad of privilege grants in what amounts to a time consuming trial and 
>> error exercise until you've got them all?
>> 
>> Or is there a single command that with just delete the role and do a blanket 
>> grant removal at the same time?
>> 



Re: Backing out of privilege grants rabbit hole

2020-04-02 Thread AC Gomez
Thanks for the quick response. The problem is, in most cases the owner is
not the grantee. So if a role, let's say a temp employee, gets grants, then
leaves, I can't do a drop owned because that temp never owned those
objects, he just was granted access. Is there a "drop granted" kind of
thing?

On Thu, Apr 2, 2020, 11:37 PM Guyren Howe  wrote:

> https://www.postgresql.org/docs/12/sql-drop-owned.html
>
> On Apr 2, 2020, at 20:34 , AC Gomez  wrote:
>
> Do I understand correctly that if a role was assigned countless object
> privileges and you want to delete that role you have to sift through a
> myriad of privilege grants in what amounts to a time consuming trial and
> error exercise until you've got them all?
>
> Or is there a single command that with just delete the role and do a
> blanket grant removal at the same time?
>
>
>


Improve COPY performance into table with indexes.

2020-04-02 Thread James Brauman
I am using the COPY command to insert 10 million rows from a CSV file
into a database table and I am finding the performance is
unacceptable. When the COPY command is executed the disk I/O is
extremely high which leads to degraded query performance for other
queries being executed on the database.

I have tried removing the indexes from the database table and this
dramatically improved performance (both reducing the execution time
and reducing disk I/O).

The constraints I am working with are:
 -  Indexes must be in place to ensure that other queries that are
executed on the table while the COPY command is executing have
acceptable performance.
 - The table should not be locked for reads during the COPY (it is
fine to lock for writes).

I don't know how COPY is implemented, but it seems that the indexes
are updated as data is inserted into the table. Ideally I would like
to delay updating the indexes until the COPY command has completed and
just update the indexes a single time, I think this would lead to much
better performance. Is this possible?

Another idea that I had was to start a transaction, drop the indexes,
run the COPY command and then recreate the indexes. The problem with
this approach is that DROP INDEX acquires an exclusive lock on the
table, which is not acceptable in this system (otherwise other queries
won't be able to execute). I tried using DROP INDEX CONCURRENTLY, but
that cannot be executed inside a transaction.

What are my options in this scenario?

Thanks




Re: Backing out of privilege grants rabbit hole

2020-04-02 Thread raf
It's probably more sensible to grant permissions to roles that
represent groups, and have roles for individual users that
inherit the permissions of the group roles. Then you don't
need to revoke the permissions just because an individual
has left.
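
A minimal sketch of that pattern (hypothetical role and table names):

-- Privileges live on a group role, not on the person.
CREATE ROLE app_readers NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readers;

-- Individual users simply join the group.
CREATE ROLE temp_employee LOGIN INHERIT;
GRANT app_readers TO temp_employee;

-- When someone leaves, no per-object revokes are needed.
REVOKE app_readers FROM temp_employee;
DROP ROLE temp_employee;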

cheers,
raf

AC Gomez wrote:

> Thanks for the quick response. The problem is, in most cases the owner is
> not the grantee. So if a role, let's say a temp employee, gets grants, then
> leaves, I can't do a drop owned because that temp never owned those
> objects, he just was granted access. Is there a "drop granted" kind of
> thing?
> 
> On Thu, Apr 2, 2020, 11:37 PM Guyren Howe  wrote:
> 
> > https://www.postgresql.org/docs/12/sql-drop-owned.html
> >
> > On Apr 2, 2020, at 20:34 , AC Gomez  wrote:
> >
> > Do I understand correctly that if a role was assigned countless object
> > privileges and you want to delete that role you have to sift through a
> > myriad of privilege grants in what amounts to a time consuming trial and
> > error exercise until you've got them all?
> >
> > Or is there a single command that with just delete the role and do a
> > blanket grant removal at the same time?




Re: Backing out of privilege grants rabbit hole

2020-04-02 Thread AC Gomez
Granted. But we are where we are, so I'm assuming this is going to be
hand-to-hand combat.

On Fri, Apr 3, 2020, 12:57 AM raf  wrote:

> It's probably more sensible to grant permissions to roles that
> represent groups, and have roles for individual users that
> inherit the permissions of the group roles. Then you don't
> need to revoke the permissions just because an individiual
> has left.
>
> cheers,
> raf
>
> AC Gomez wrote:
>
> > Thanks for the quick response. The problem is, in most cases the owner is
> > not the grantee. So if a role, let's say a temp employee, gets grants,
> then
> > leaves, I can't do a drop owned because that temp never owned those
> > objects, he just was granted access. Is there a "drop granted" kind of
> > thing?
> >
> > On Thu, Apr 2, 2020, 11:37 PM Guyren Howe  wrote:
> >
> > > https://www.postgresql.org/docs/12/sql-drop-owned.html
> > >
> > > On Apr 2, 2020, at 20:34 , AC Gomez  wrote:
> > >
> > > Do I understand correctly that if a role was assigned countless object
> > > privileges and you want to delete that role you have to sift through a
> > > myriad of privilege grants in what amounts to a time consuming trial
> and
> > > error exercise until you've got them all?
> > >
> > > Or is there a single command that with just delete the role and do a
> > > blanket grant removal at the same time?
>
>
>


Re: Could someone please help us share the procedure to troubleshoot the locks on proc issues.

2020-04-02 Thread postgann2020 s
Thanks Adrian, will share the details.

On Fri, Apr 3, 2020 at 4:30 AM Adrian Klaver 
wrote:

> On 4/2/20 12:37 PM, postgann2020 s wrote:
> > Hi Team,
> >
> > Good Evening,
> >
> > Could someone please help us share the procedure to troubleshoot the
> > locks on proc issues.
> >
> > Environment:
> > 
> >   1 pgpool server (Master Pool Node) using Straming replication with
> > load balancing
> >   4 DB nodes (1Master and 3 Slaves).
> >
> >   Versions:
> >   1. postgres: 9.5.15
> >   2. pgpool   : 3.9
> >   3. repmgr:  4.1
> >
> > We are continuously facing locking issues for below procedures , due to
> > this the  rest of the call for these procs going into waiting
> > state.Which cause the DB got hung. Below are the procs  running with
> > DB_User2 from the application.
> >
> > 1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This
> > proc it self calling Schema1.cable_remove_validation($1,$2).
> > 2. select * from Schema1.cable_remove_validation($1,$2)  ==> This is
> > also calling from the applications
>
> To figure out below we need to see what is happening in above.
>
> >
> > if we ran explain analyze, its taking msec only, but if we run
> > simultaneouly from application getting locked and waiting state.
> >
> > We have ran below query for showing blocking queries and attached output
> > in Blocking_Queries_with_PID.csv file:
> >
> > SELECT
> > pl.pid as blocked_pid
> > ,psa.usename as blocked_user
> > ,pl2.pid as blocking_pid
> > ,psa2.usename as blocking_user
> > ,psa.query as blocked_statement
> > FROM pg_catalog.pg_locks pl
> > JOIN pg_catalog.pg_stat_activity psa
> > ON pl.pid = psa.pid
> > JOIN pg_catalog.pg_locks pl2
> > JOIN pg_catalog.pg_stat_activity psa2
> > ON pl2.pid = psa2.pid
> > ON pl.transactionid = pl2.transactionid
> > AND pl.pid != pl2.pid
> > WHERE NOT pl.granted;
> >
> > Output: attached output in Blocking_Queries_with_PID.csv file
> >
> >
> > The waiting connections are keep on accumulating and cause DB hung.
> > I have attached pg_stat_activity excel file with the user along with the
> > proc queries which cause waiting state.
> >
> > Finds:
> >
> > There are total 18 connections for DB_User2 which are running only above
> > 2 procs, Out of that only one connection with 18732 is running proc
> > (select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long
> > time  and reset of all 17 connections are in waiting state from the long
> > time.
> >
> > There are many exclusive locks on table for 18732 and other process as
> > well. I have attached pg_locks reference excel(Lock_Reference_For_PROC)
> > with highlighted pid 18732.
> >
> > Could someone please suggest the procedure to troubleshoot this issue.
> > Please find the attachment for reference.
> >
> > Thanks,
> > Postgann.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!


Simply replace
  SET col = unnest(array_value)
with

  SET col = array_value[1]


I tried

update temprid set
   ContactFirstName = xpath(
      '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text

But got this error:

   ERROR:  syntax error at or near "["
   LINE 3: .../BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text

How to fix this?

Posted also at

https://stackoverflow.com/questions/60993975/fixing-set-returning-functions-are-not-allowed-in-update

Andrus.