Barman versus pgBackRest

2018-03-06 Thread Thomas Poty
Hello Community,

I am hesitating between Barman and pgBackRest.  I have found a lot of
common points between them and a few differences:

About Barman, I like:
- being able to restore on a remote server from the backup server
- using replication slots for backing up WAL on the backup server

About pgBackRest, I like:

- real differential backups
- lots of options
- an option for backing up even if PostgreSQL is already in backup mode


I would like to have:
- advice or feedback about using pgBackRest or Barman
- the pros and cons of these solutions
- differences that I may not have seen

Thank you

Thomas


primary key and unique index

2018-03-23 Thread Thomas Poty
Hi all,

I am migrating from MySQL to PostgreSQL 9.6.

In MySQL, a "show create table" gives me:
 ...
  PRIMARY KEY (`ID`,`CountryCode`,`LanguageCode`),
  UNIQUE KEY `unique_my table_4` (`ID`,`CountryCode`,`LanguageCode`),
...

So, in PostgreSQL, does it make sense to create a primary key AND a unique
index based on the same columns?
Is PostgreSQL smart enough to use the unique index created for the primary
key?

I know PostgreSQL can use an existing unique index to create a primary key,
but I also know it is possible to create several indexes on the same
columns in the same order.
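As a quick illustration (my sketch, not from the thread; the table and
index names are hypothetical), creating both really does leave two
identical indexes that PostgreSQL must maintain on every write:

```sql
CREATE TABLE demo (
    id           integer,
    countrycode  text,
    languagecode text,
    PRIMARY KEY (id, countrycode, languagecode)  -- creates demo_pkey implicitly
);

-- A second, identical unique index: allowed, but pure write overhead.
CREATE UNIQUE INDEX demo_unique ON demo (id, countrycode, languagecode);

-- Both indexes show up here, confirming the duplication:
SELECT indexname FROM pg_indexes WHERE tablename = 'demo';
```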

Thanks

Thomas


Re: primary key and unique index

2018-03-24 Thread Thomas Poty
Many thanks, Phil, for the complementary information.

On Sat, 24 Mar 2018 at 09:53, HORDER Phil wrote:

> Some databases will create a unique index for you when you create a
> primary key.
>
> Oracle will create one, but only if you haven’t already done that.
>
>
>
> Postgres will ALWAYS create a unique index based on the primary key – so
> you should never do that as well, or the db will have to maintain two
> identical indexes.
>
>
>
> (When we migrated our db design from Oracle we ended up with lots of
> duplicate indexes).
>
>
>
> SQL> Select * from pg_indexes order by schemaname, tablename;
>
>
>
> Phil Horder
>
> Database Mechanic
>
>
>
> Thales
>
> Land and Air Systems
>
> Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK
>
> www.thalesgroup.com/uk
>
>
>
> Tel: +44 (0) 1963 37 2041
>
> Email: phil.hor...@uk.thalesgroup.com
>
> *From:* Thomas Poty [mailto:thomas.p...@gmail.com]
> *Sent:* 23 March 2018 07:56
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* primary key and unique index
>
>
>
> Hi all,
>
> I am migrating fromMySQL to Postgresql 9.6.
>
> In MySQL a  "show create table" gives me :
>  ...
>   PRIMARY KEY (`ID`,`CountryCode`,`LanguageCode`),
>   UNIQUE KEY `unique_my table_4` (`ID`,`CountryCode`,`LanguageCode`),
> ...
>
> So, In PostgreSQL, does it make sense to create a primary key AND a unique
> index based on the same columns?
>
> Is PostgreSQL smart enough to use the unique index created for the primary
> key.
>
> I know PostgreSQL can be based on a unique index to create a primary key
> but I also know it is possible to create several indexes on the same
> columns with the same order.
>
> Thanks
>
> Thomas
>


dealing with lock

2018-04-06 Thread Thomas Poty
Hello All,

Here is a bit of context: we are migrating from MySQL to PostgreSQL and we
have about 1000 tables. Some tables are quite small but others are very
large. The service provided to our clients relies on high availability
with minimal downtime because of legal deadlines.

So, let's imagine:
in Transaction 1: I am querying table A (SELECT)
in Transaction 2: I am trying to alter table A (due to our product
evolution)
in Transaction 3: I want to query table A (SELECT)

In MySQL: Transaction 1 retrieves data from table A.
Transaction 2 tries to alter table A but is blocked by Transaction 1.
Transaction 3 retrieves data from table A (retrieving data is possible
until Transaction 2 commits).

In PostgreSQL, it is a bit different: Transaction 1 retrieves data from
table A.
Transaction 2 tries to alter table A but is blocked by Transaction 1.
Transaction 3 cannot retrieve data because Transaction 2 has not finished
its transaction.

So, with MySQL, the application is able to keep working with the table
until the ALTER TABLE completes.

With PostgreSQL, the application will probably be blocked (until it
obtains the lock on the table).
If I understand correctly, if the ALTER TABLE takes a long time (several
hours) to execute, clients will be blocked for several hours.

How do you deal with this problem? Maybe I missed something?

Thank you all for your answers.
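One common mitigation (my sketch, not from the thread; the table and
column names are hypothetical) is to run such DDL with a lock timeout, so
it fails fast instead of queueing behind long transactions and blocking
every new reader behind it:

```sql
-- Abort the ALTER quickly if the ACCESS EXCLUSIVE lock cannot be taken,
-- instead of letting it queue and block all subsequent SELECTs.
SET lock_timeout = '5s';
ALTER TABLE table_a ADD COLUMN new_col integer;
-- On timeout the statement fails with an error; retry later or in a
-- maintenance window.
```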


Re: dealing with lock

2018-04-06 Thread Thomas Poty
Thank you Laurenz !


We will certainly have to change our release management.

Is there a way to identify the list of statements that require a rewrite
of the table?

If I am right, at least these statements do:
- creating a unique index
- adding a column with a default value
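A practical way to check whether a given statement rewrites the table (a
sketch on a scratch database, not from the thread) is to compare
pg_class.relfilenode before and after, since a rewrite gives the table a
new storage file:

```sql
CREATE TABLE rewrite_test (id integer);

SELECT relfilenode FROM pg_class WHERE relname = 'rewrite_test';

-- On 9.6 this forces a full table rewrite:
ALTER TABLE rewrite_test ADD COLUMN c integer DEFAULT 0;

-- A different relfilenode here confirms the table was rewritten:
SELECT relfilenode FROM pg_class WHERE relname = 'rewrite_test';
```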

Regards,

 Thomas


2018-04-06 17:11 GMT+02:00 Laurenz Albe :

> On Fri, 2018-04-06 at 16:58 +0200, Thomas Poty wrote:
> > Here is a bit of context : we are migrating from MySQL to PostgreSQL and
> we have about 1000 tables.
> > Some tables are quite small but some others are very large. The service
> provided to our clients
> > relies on a high avaiability with a minimum down time due to any legal
> deadlines.
> >
> > So, lets imagine :
> > in Transaction 1 : I am querying Table A (select)
> > in Transaction 2 : I am trying to alter Table A ( due to our product
> evolution)
> > in Transaction 3 : I am want to query Table1 (select)
> >
> > in MySQL : Transaction 1 retrieve data in Table A.
> > Transaction 2 : is trying to alter Table A but it is blocked by
> Transaction 1
> > Transaction 3 : Transaction 1 retrieves data in Table A ( Retreiving
> data is possible until Transaction 2 commit)
> >
> > In PostgreSQL, it is a bit different : Transaction 1 retrieve data in
> Table A.
> > Transaction 2 : is trying to alter Table A but it is blocked by
> Transaction 1
> > Transaction 3 : Transaction 3 cannot retrieve data because  Transaction
> 2 did not terminate its transaction.
> >
> > So, with MySQL, the application is able to keep working with the table
> until the alter table completed.
> >
> > With PostgreSQL, the application will probably be blocked (until having
> the lock on this table).
> > If I understand, if the alter table takes a long time (several hours) to
> execute, clients will be blocked during several hours.
> >
> > How do you deal with this problem? Maybe I missed something ?
>
> The solution is to avoid ALTER TABLE statements that have to rewrite
> the table outside of maintenance windows.
>
> If your transactions are short, as they should be, it should not be
> a big deal to add or drop a column, for example.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>


Re: dealing with lock

2018-04-07 Thread Thomas Poty
Thank you Laurenz!

Regards
Thomas

On Sat, 7 Apr 2018 at 00:02, Adrian Klaver wrote:

> On 04/06/2018 12:09 PM, Thomas Poty wrote:
> > Thank you Laurenz !
> >
> >
> > We will certainly have to change our release management.
> >
> > Is there a way to identify the list of statements that have to rewrite
> > the table.
>
> https://www.postgresql.org/docs/10/static/sql-altertable.html
>
> Notes
>
> "Adding a column with a DEFAULT clause or changing the type of an
> existing column will require the entire table and its indexes to be
> rewritten. As an exception when changing the type of an existing column,
> if the USING clause does not change the column contents and the old type
> is either binary coercible to the new type or an unconstrained domain
> over the new type, a table rewrite is not needed; but any indexes on the
> affected columns must still be rebuilt. Adding or removing a system oid
> column also requires rewriting the entire table. Table and/or index
> rebuilds may take a significant amount of time for a large table; and
> will temporarily require as much as double the disk space."
>
>
> For the more general case of modifying a table and the locks it takes,
> search the above link for lock to see what locks are taken instead of
> the default of ACCESS EXCLUSIVE.
>
> For what the locks mean see:
>
> https://www.postgresql.org/docs/10/static/explicit-locking.html
>
> >
> > If I am right, at least these statements need to do this :
> > - create a unique index
> > - add a column with a default value
>
>
>
> >
> > Regards,
> >
> >   Thomas
> >
> >
> > 2018-04-06 17:11 GMT+02:00 Laurenz Albe  > <mailto:laurenz.a...@cybertec.at>>:
> >
> > On Fri, 2018-04-06 at 16:58 +0200, Thomas Poty wrote:
> > > Here is a bit of context : we are migrating from MySQL to
> PostgreSQL and we have about 1000 tables.
> > > Some tables are quite small but some others are very large. The
> service provided to our clients
> > > relies on a high avaiability with a minimum down time due to any
> legal deadlines.
> > >
> > > So, lets imagine :
> > > in Transaction 1 : I am querying Table A (select)
> > > in Transaction 2 : I am trying to alter Table A ( due to our
> product evolution)
> > > in Transaction 3 : I am want to query Table1 (select)
> > >
> > > in MySQL : Transaction 1 retrieve data in Table A.
> > > Transaction 2 : is trying to alter Table A but it is blocked by
> Transaction 1
> > > Transaction 3 : Transaction 1 retrieves data in Table A (
> Retreiving data is possible until Transaction 2 commit)
> > >
> > > In PostgreSQL, it is a bit different : Transaction 1 retrieve data
> in Table A.
> > > Transaction 2 : is trying to alter Table A but it is blocked by
> Transaction 1
> > > Transaction 3 : Transaction 3 cannot retrieve data because
> Transaction 2 did not terminate its transaction.
> > >
> > > So, with MySQL, the application is able to keep working with the
> table until the alter table completed.
> > >
> > > With PostgreSQL, the application will probably be blocked (until
> having the lock on this table).
> > > If I understand, if the alter table takes a long time (several
> hours) to execute, clients will be blocked during several hours.
> > >
> > > How do you deal with this problem? Maybe I missed something ?
> >
> > The solution is to avoid ALTER TABLE statements that have to rewrite
> > the table outside of maintenance windows.
> >
> > If your transactions are short, as they should be, it should not be
> > a big deal to add or drop a column, for example.
> >
> > Yours,
> > Laurenz Albe
> > --
> > Cybertec | https://www.cybertec-postgresql.com
> > <https://www.cybertec-postgresql.com>
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: dealing with lock

2018-04-07 Thread Thomas Poty
Thank you Thomas

Regards
Thomas

On Sat, 7 Apr 2018 at 08:01, Thomas Kellerer wrote:

> Adrian Klaver schrieb am 07.04.2018 um 00:02:
> >> Is there a way to identify the list of statements that have to rewrite
> the table.
> >
> > https://www.postgresql.org/docs/10/static/sql-altertable.html
> >
> > Notes
> >
> > "Adding a column with a DEFAULT clause or changing the type of an
> existing column will require the entire table and its indexes to be
> rewritten.
> Note that this will change with Postgres 11
>
>
> https://www.depesz.com/2018/04/04/waiting-for-postgresql-11-fast-alter-table-add-column-with-a-non-null-default/
>
>
>
>


algo for canceling a deadlocked transaction

2018-04-09 Thread Thomas Poty
Good afternoon,

My question is: in case of a deadlock between two transactions, how can I
know which transaction will be canceled? Is it predictable?

I have tried to look into the sources but I have found nothing (probably
I am the problem).

Regards,


Thomas


Re: algo for canceling a deadlocked transaction

2018-04-09 Thread Thomas Poty
Hello Stephen,

> The short answer is "it's whichever one detected the deadlock."  The
> deadlock timeout fires after a lock has been held that long and if a
> deadlock is detected then the process detecting it will be canceled.

OK, and the long answer? Is it random?


> I'd strongly recommend reviewing your application and addressing
> deadlocks by changing how the application acquires locks to be
> consistent and to avoid lock escalation instead of worrying about how to
> predict a deadlock- a properly designed and written application
> shouldn't be causing deadlocks to happen in the first place.

I didn't want to predict a deadlock happening. I only want to know whether
it is predictable which transaction will be canceled.

Thank you

2018-04-09 15:51 GMT+02:00 Stephen Frost :

> Greetings,
>
> * Thomas Poty (thomas.p...@gmail.com) wrote:
> > My question is : In case of a deadlock between 2 transaction,  how to
> know
> > which transaction will be canceled? Is it predictable?
>
> The short answer is "it's whichever one detected the deadlock."  The
> deadlock timeout fires after a lock has been held that long and if a
> deadlock is detected then the process detecting it will be canceled.
>
> I'd strongly recommend reviewing your application and addressing
> deadlocks by changing how the application acquires locks to be
> consistent and to avoid lock escalation instead of worrying about how to
> predict a deadlock- a properly designed and written application
> shouldn't be causing deadlocks to happen in the first place.
>
> Thanks!
>
> Stephen
>


Re: algo for canceling a deadlocked transaction

2018-04-09 Thread Thomas Poty
Hello Tom,

Thank you for the clarification!

Regards,
 Thomas

On Mon, 9 Apr 2018 at 17:04, Tom Lane wrote:

> Christophe Pettus  writes:
> >> On Apr 9, 2018, at 07:33, Thomas Poty  wrote:
> >> ok, and long  answer ? is it random?
>
> > It's not literally random, but from the application point of view, it's
> not predictable.  For example, it's not always the one that opened first,
> or any other consistent measure.
>
> It's whichever one runs the deadlock detector first after the circular
> wait becomes established.  For instance:
>
> * Process A takes lock L1
>
> * Process B takes lock L2
>
> * Process A tries to take lock L2, blocks
>
> * Process B tries to take lock L1, blocks (now a deadlock exists)
>
> Process A will run the deadlock detector one deadlock_timeout after
> blocking.  If that happens before B has blocked, then A will see
> no deadlock and will go back to waiting.  In that case, when B's
> own deadlock_timeout expires and it runs the deadlock detector,
> it will see the deadlock and fix it by canceling its own wait.
> On the other hand, if B started to wait less than one deadlock_timeout
> after A did, then A will be first to observe the deadlock and it will
> cancel itself, not B.
>
> So you can't predict it unless you have a lot of knowledge about
> the timing of events.  You could probably make it more predictable
> by making deadlock_timeout either very short or very long, but
> neither of those are desirable things to do.
>
> regards, tom lane
>
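Tom's timeline above can be reproduced as a sketch in two psql sessions
(the table and values are hypothetical, not from the thread):

```sql
-- Assumes: CREATE TABLE t (id integer PRIMARY KEY, v integer);
-- with rows id = 1 and id = 2 already inserted.

-- Session A:
BEGIN;
UPDATE t SET v = 1 WHERE id = 1;   -- A takes row lock L1

-- Session B:
BEGIN;
UPDATE t SET v = 1 WHERE id = 2;   -- B takes row lock L2

-- Session A:
UPDATE t SET v = 2 WHERE id = 2;   -- A blocks waiting for L2

-- Session B:
UPDATE t SET v = 2 WHERE id = 1;   -- B blocks waiting for L1: deadlock

-- After deadlock_timeout (default 1s), whichever session runs the
-- detector and sees the cycle aborts with "ERROR: deadlock detected";
-- the other session's UPDATE then proceeds.
```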


Re: Barman versus pgBackRest

2018-04-11 Thread Thomas Poty
Hello David,

Sorry for answering only now, but I have only just seen your reply.

> To be clear, I'm the original author and primary maintainer of
pgBackRest.

I am very happy to see people like you taking the time to answer me. Thank you.

> This a good feature, and one that has been requested for pgBackRest. You
> can do this fairly trivially with ssh, however, so it generally hasn't
> been a big deal for people.  Is there a particular reason you need this
> feature?

The reason is probably psychological, but I like the idea of a single
connection point from which to restore DBs in different locations.

I am very eager to see "replication slot" support and the "remote restore"
feature added.

Thank you for your time,

Regards,

Thomas



2018-03-09 15:56 GMT+01:00 David Steele :

> Hi Thomas,
>
> On 3/6/18 2:53 PM, Thomas Poty wrote:
> > Hello Community,
> >
> > I hesitate to use barman or pgBackRest.  I have found a lot of common
> > points between them and a few differences:
>
> To be clear, I'm the original author and primary maintainer of
> pgBackRest.  I'll let the Barman folks speak to their strengths, but I'm
> happy to address your points below.
>
> > About pgBarman, I like :
> > - be able restore on a remote server from the backup server
>
> This a good feature, and one that has been requested for pgBackRest. You
> can do this fairly trivially with ssh, however, so it generally hasn't
> been a big deal for people.  Is there a particular reason you need this
> feature?
>
> > - use replication slots for backingup wal on the backup server.
>
> Another good feature.  We have not added it yet because pgBackRest was
> originally written for very high-volume clusters (100K+ WAL per day) and
> our parallel async feature answers that need much better.  We recommend
> a replicated standby for more update-to-date data.
>
> Even so, we are looking at adding support for replication slots to
> pgBackRest.  We are considering a hybrid scheme that will use
> replication to keep the WAL archive as up to date as possible, while
> doing bulk transfer with archive_command.
>
> > About pgBackRest, I like :
> >
> > - real differential backup.
> > - lots of options
> > - option for backingup if PostgreSQL is already in backup mode
> >
> >
> > I would like to have :
> > - advices or feedbach about using pgBackrest or barman.
> > - pros and cons of these solutions
>
> I'll stick with some of the major pgBackRest pros:
>
> - Parallel backup including compression and checksums
> - Encryption
> - S3 support
> - Parallel archive
> - Delta restore
> - Page checksum validation
> - Backup resume
>
> More about features here: https://pgbackrest.org
>
> > - differences that I would not have seen.
>
> pgBackRest is used in some very demanding environments and we are
> constantly answering the needs of our users with features and
> performance improvements, e.g. the enormous improvements to archive-push
> speed in the 2.0 release.
>
> I'd be happy to answer any specific questions you have about pgBackRest.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>


Re: Using the public schema

2018-04-23 Thread Thomas Poty
Hi Charlin,
I invite you to read this document; it explains the security issue with
the public schema very well:
https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058:_Protect_Your_Search_Path

The main problem with the public schema is the default privilege (execute)
granted to all users able to connect.

Personally, each time I create a database, I remove the public schema.
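A minimal sketch of the two options (stripping the default grant, or
dropping the schema entirely, as I do):

```sql
-- Option 1: keep the schema but remove the default grant to PUBLIC
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- Option 2: remove the schema entirely (only if nothing uses it)
DROP SCHEMA public;
```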

Regards
Thomas

On Mon, 23 Apr 2018 at 17:00, Charlin Barak wrote:

> Hi,
> We will be developing three new applications in PostgreSQL, each having
> its own database instance running on different hosts. We will only have one
> schema per Postgres instance. The data is read-write only by one
> application/schema in the DB instance and the data is published to other
> applications via API calls.
>
> In such a standalone database configuration, are there any security
> implications or any downsides to creating the application in the public
> schema?
>
> Thanks.
>
>
>


Re: Using the public schema

2018-04-23 Thread Thomas Poty
Erratum:

The main problem with the public schema is the default privilege (*create*)
granted to all users able to connect.


On Mon, 23 Apr 2018 at 21:16, Thomas Poty wrote:

> Hi charlin,
> I invite you to  read this doc it explains very well  the security issue
> with the public schéma :
> https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058:_Protect_Your_Search_Path
>
> The main problem with the public schéma is the default privileges
> (execute) granted to all users able to connect.
>
> Personally, each time i create a database, i remove the public schema
>
> Regards
> Thomas
>
> Le lun. 23 avr. 2018 à 17:00, Charlin Barak  a
> écrit :
>
>> Hi,
>> We will be developing three new applications in PostgreSQL, each having
>> its own database instance running on different hosts. We will only have one
>> schema per Postgres instance. The data is read-write only by one
>> application/schema in the DB instance and the data is published to other
>> applications via API calls.
>>
>> In such a standalone database configuration, are there any security
>> implications or any downsides to creating the application in the public
>> schema?
>>
>> Thanks.
>>
>>
>>


Issue with the public schema and pg_restore

2018-04-25 Thread Thomas Poty
Hello,
Here is the context:
I have a database db1 with a public schema on cluster C1.
This schema doesn't grant any privileges to the public role.
I have a dump of this db.
On another cluster C2, template1 doesn't contain a public schema.
I restored db1 on cluster C2 and saw that the public role had the CREATE
privilege on the public schema.

I cannot explain this.

Thank you.

Thomas


Re: Issue with the public schema and pg_restore

2018-04-26 Thread Thomas Poty
[beginning of message missing in the archive] ...public role on schema
public if it found it. Is that correct?

Regards

Thomas

2018-04-25 20:24 GMT+02:00 Adrian Klaver :

> On 04/25/2018 11:07 AM, Thomas Poty wrote:
>
>> Hello,
>>   Here is the context :
>>
>
> Postgres version?
>
> I have a db db1 with a schéma public on cluster C1.
>> This schéma doesn't have any privileges on public role.
>> I have a dump of this db.
>>
>
> What was the dump command?
>
> On an other cluster C2, the template1 doesn't contain schema public.
>> I have restored db1  on cluster C2 and i saw public role had the
>> privilege create on the schéma public.
>>
>
> What was the restore command?
>
>
>
>> I cannot explain this
>>
>> Thank you.
>>
>> Thomas?
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Issue with the public schema and pg_restore

2018-04-26 Thread Thomas Poty
I ran this command (the same as the previous one, without -d and with the
-f argument):

/bin/pg_restore --username=backup --host=VM38 --port=5432 --no-password
--disable-triggers --verbose --clean --create --if-exists -f
/tmp/thomasp.log
/mnt/backupPostgreSQL/serverconfig_prod/backup_in_progress/serverconfig_prod_thomasp.sql

Here is partial content of the file /tmp/thomasp.log.
I don't see any "CREATE SCHEMA public"...

BUT I do see CREATE DATABASE ... TEMPLATE = template0:



CREATE DATABASE "serverconfig" WITH TEMPLATE = template0 ENCODING = 'UTF8'
LC_COLLATE = 'en_GB.UTF-8' LC_CTYPE = 'en_GB.UTF-8';
ALTER DATABASE serverconfig OWNER TO serverconfig_owner;
\connect serverconfig


At this point, I see these solutions for my issue:
- never use the public schema (i.e. rename it) =>
   drop it after a CREATE DATABASE or after a pg_restore with --create
- use a post-restore script to revoke all privileges from public

In my opinion, pg_dump/pg_restore could add new features:
- a new argument to specify a template (like --template), only relevant
with --create of pg_restore
- another solution could be:
  with pg_dump: include CREATE statements for all schemas
  with --create of pg_restore, have this behavior: create the database,
drop everything in the new db, and create all schemas included in the dump
- another solution could be: a new argument to specify a template other
than template0 (like --template), only relevant with --create of
pg_restore

Is it possible to discuss these potential features? With whom? Is there a
specific channel?

I would like to thank Adrian.

Regards Thomas

2018-04-26 9:03 GMT+02:00 Thomas Poty :

> Hi,
>
> *About version :*
>
> This is the same on both server
>
> * + source  server  :*
>
>
> [[local]] thomasproot@serverconfig=# select version();
> PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
> (1 row)
>
> Time: 0.183 ms
>
> *+ target server*
>
> [[local]] thomasproot@postgres=# select version();
> PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
> (1 row)
>
> Time: 4.711 ms
>
>
> *+ pg_dump :*
>
> *Command :*
> /usr/pgsql-9.6/bin/pg_dump --username=thomasp --host=VM65 --port=5432
> --quote-all-identifiers --blobs --format=c --compress=0 --verbose
> serverconfig > /mnt/backupPostgreSQL/serverconfig_prod/backup_in_progress/serverconfig_prod_thomasp.sql
>
> *version :*
> pg_dump (PostgreSQL) 9.6.7
>
>
> *+ pg_restore :*
>
> *command :*
> /bin/pg_restore --username=backup --host=VM38 --port=5432
> --dbname=postgres --no-password --disable-triggers --verbose --clean
> --create --if-exists /mnt/backupPostgreSQL/serverconfig_prod/backup_in_progress/serverconfig_prod_thomasp.sql
>
> *version :*
> pg_restore (PostgreSQL) 9.6.7
>
>
>
>
> *About privileges: *
> *+ source db :*
>
> [[local]] thomasproot@serverconfig=# \dn+
>List of schemas
> Name   │ Owner              │ Access privileges                                                               │ Description
> public │ serverconfig_owner │ serverconfig_owner=UC/serverconfig_owner, toolboxsysadmin=U/serverconfig_owner │ standard public schema
> (remainder truncated in the archive)

Re: Issue with the public schema and pg_restore

2018-04-26 Thread Thomas Poty
Hi,

About the minor version, we will plan the upgrade soon.

"So how did you revoke the privileges for PUBLIC in the above?" [source db]
=> After creating the database, I executed:
revoke all on schema public from public;

Thanks for the 'no-op' .

Thomas



2018-04-26 16:28 GMT+02:00 Adrian Klaver :

> On 04/26/2018 12:03 AM, Thomas Poty wrote:
>
>> Hi,
>>
>
> Comments in line below.
>
>
>> *About version :*
>>
>> This is the same on both server
>>
>> _ + source  server  :_
>>
>>
>> [[local]] thomasproot@serverconfig=# select version();
>> PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
>> (1 row)
>>
>> Time: 0.183 ms
>>
>> _+ target server_
>>
>> [[local]] thomasproot@postgres=# select version();
>> PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
>> (1 row)
>>
>> Time: 4.711 ms
>>
>
> The latest 9.6 version is 9.6.8 and it has changes to deal with this:
>
> https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058:_Protect_Your_Search_Path
>
> Not sure if it would cover your issues, but worth the upgrade anyway.
>
>
>>
>>
>
>> _+ pg_restore :_
>>
>> /command :/
>> /bin/pg_restore --username=backup --host=VM38 --port=5432
>> --dbname=postgres --no-password --disable-triggers --verbose --clean
>> --create --if-exists /mnt/backupPostgreSQL/serverconfig_prod/backup_in_progress/serverconfig_prod_thomasp.sql
>>
>
> FYI, --disable-triggers in this context is a no-op.
>
>
>> /version :/
>> pg_restore (PostgreSQL) 9.6.7
>>
>>
>> *About privileges:
>>
>> *
>> _+ source db :_
>>
>> [[local]] thomasproot@serverconfig=# \dn+
>> List of schemas
>> Name     │ Owner              │ Access privileges                                                               │ Description
>> public   │ serverconfig_owner │ serverconfig_owner=UC/serverconfig_owner, toolboxsysadmin=U/serverconfig_owner │ standard public schema
>> public_h │ serverconfig_owner │ serverconfig_owner=UC/serverconfig_owner, toolboxsysadmin=U/serverconfig_owner │
>> (2 rows)
>>
>>
> So how did you revoke the privileges for PUBLIC in the above?
>
>
>> _+ target db _
>>
>> /schema of template1 :/
>>
>> thomasproot@template1=# \dn+
>>   List of schemas
>> Name │ Owner │ Access privileges │ Description
>> (0 rows)
>>

Re: Schema/Data conversion opensource tools from MySQL to PostgreSQL

2018-06-26 Thread Thomas Poty
Hi Chiranjeevi,

I know my answer doesn't really address your request, but we have just
converted a database from MySQL to PostgreSQL with about 1000 tables.
We had to convert some functions present in MySQL but not in PostgreSQL,
and we also had to convert queries.
I converted all of that manually, table by table, column by column, etc.

We developed a program that compares structures, a program that migrates
data, and a program that compares data.

All of that is to say that I don't think you can convert only by running
a tool like that.

I also think it is a bit difficult to suggest a tool without knowing
exactly what is needed.

Regards
Thomas

On Tue, 26 Jun 2018 at 08:25, Arnaud L. wrote:

> Le 25-06-18 à 18:47, chiru r a écrit :
> > Please suggest Schema/Data conversion opensource tools from MySQL to
> > PostgreSQL.
>
> Hi.
> I used this php script which did a pretty good job :
> https://github.com/AnatolyUss/FromMySqlToPostgreSql
>
> --
> Regards
>
>


Re: Problem Postgres

2018-06-26 Thread Thomas Poty
Hello,
Can you try with a superuser?

On Tue, 26 Jun 2018 at 10:06, Emanuele Musella wrote:

> Good morning,
>
> we have the following error:
>
> 2018-06-26 09:48:44 CEST ERRORE:  non è stato possibile ottenere
> informazioni sul file "base/16395/19739338": Permission denied
> 2018-06-26 09:48:44 CEST ISTRUZIONE:  select
> p.datname,pg_database_size(p.datname) from pg_database p
>
> This error repeats every 20 minutes.
>
> Can you help me?
>
> Regards
>
> Emanuele Musella
>


Re: Barman versus pgBackRest

2018-09-04 Thread Thomas Poty
> Do you just change the IP address of the "restore target"?
Do you mean a typical restore command? If so, here is a small bash script
I use to check restores:

barmanBackupID=""
barmanBackupServer=$1
if [ 1 -eq $# ]; then
    echo "${barmanBackupServer}"
    # take the ID of the most recent backup for this server
    barmanBackupID=$(barman list-backup "${barmanBackupServer}" | tac | tail -n2 | head -n1 | awk '{print $2}')
    barman recover --jobs 4 \
        --remote-ssh-command "ssh postgres@srv397" \
        --target-time "$(date +%Y-%m-%d) 23:30:00.000" \
        --target-action pause \
        "${barmanBackupServer}" "${barmanBackupID}" \
        /mnt/data1/postgresql_postgresqlbackupintegritychecker_general/data
else
    echo "server name expected as the only argument of script"
fi

> Every N minutes you copy the WAL files to the backup server?
Currently I use Barman configured with a replication slot (for minimal
data loss without being synchronous) and WAL archiving to the backup
server.
WAL archiving to the backup server is done through the archive_command
setting in postgresql.conf.

Does that answer your questions?

On Tue, 4 Sep 2018 at 13:42, Ron wrote:
> On 03/09/2018 08:56 AM, David Steele wrote:
> [snip]
> >> About pgBarman, I like :
> >> - be able restore on a remote server from the backup server
> > This a good feature, and one that has been requested for pgBackRest. You
> > can do this fairly trivially with ssh, however, so it generally hasn't
> > been a big deal for people.  Is there a particular reason you need this
> > feature?
>
> (Sorry to dredge up this old thread.)
>
> Do you just change the IP address of the "restore target"?
>
> >> - use replication slots for backingup wal on the backup server.
> > Another good feature.  We have not added it yet because pgBackRest was
> > originally written for very high-volume clusters (100K+ WAL per day) and
> > our parallel async feature answers that need much better.  We recommend
> > a replicated standby for more update-to-date data.
>
> Every N minutes you copy the WAL files to the backup server?
>
>
> --
> Angular momentum makes the world go 'round.
>
>


Re: Barman versus pgBackRest

2018-09-04 Thread Thomas Poty
Your problem looks like this one ;-)
https://groups.google.com/forum/#!topic/pgbarman/kXcEpSLhw8w
The answer may help.

Physical backup/restore operates on a whole cluster...

Le mar. 4 sept. 2018 à 14:47, Ron  a écrit :

> On 09/04/2018 07:14 AM, Thomas Poty wrote:
>
> > Do you just change the IP address of the "restore target"?
> Do you expect a typical restore command?
>
>
> I'm investigating barman and pgBackRest to replace our exitsing NetBackup
> system, so don't know what you mean by "typical restore command".
>
> Here are our typical use cases:
>
> 1. If my barman backup server has full backups, diffs and WALs for
> database server MAIN_PG_SERVER, which hosts databases D1, D2 and D3, how
> much work is it to do a PITR restore of D2 to a *different* Pg server?
>
> 2. Can I restore an older copy of database D2 to MAIN_PG_SERVER, *giving
> it a new name* (so that now there would be databases D1, D2, D3 *and
> D2_OLD*)?  That's pretty trivial on SQL Server, and something I've had to
> do before so the operations staff can research a problem.)
>
> Thanks
>
> --
> Angular momentum makes the world go 'round.
>


increasing HA

2018-09-04 Thread Thomas Poty
Hello,

I am looking for advice about solutions to increase high availability.

Here is a bit of context:

I have a master-slave architecture:
 - 1 master
 - 2 asynchronous slaves using replication slots
 - backups are made with barman using a replication slot
 - WAL archiving is done towards the barman server

I think 3 axes could be improved (I am totally novice with these):

- using a proxy
 I found HAProxy.
 Could you recommend any other solutions to explore, or share your experience?

- using automatic failover
 I found PAF.
 Could you recommend any other solutions to explore, or share your experience?

- using a tool for fencing a failing node
Ideally, I imagine disabling network I/O to prevent clients from connecting
and to cut exchanges between the backup server and the failing server, plus
disabling automatic restart of the PostgreSQL service on the failed node.
Could you share your experience about this?

- Maybe another axis to explore?

Thank you

Thomas


Re: increasing HA

2018-09-05 Thread Thomas Poty
Hi Jehan-Guillaume,

Thanks for your opinion.

At first glance, I may use PAF for automatic failover and HAProxy as a
proxy; for fencing, I am a bit disappointed: I don't know what to do/use.

How about you, do you have any preference about tools/solutions to use?

Now, I am aware that I will have to check and document limitations/risks...

Le mer. 5 sept. 2018 à 11:38, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> a écrit :

> Hi all,
>
> On Tue, 4 Sep 2018 15:09:51 +
> ROS Didier  wrote:
>
> > Hi
> >I have made a lot of PostgreSQL High Availability tests
> (more
> > than 20 by solution) and the two following products respond well to the
> need :
> >
> > (1)Repmgr (2ndQuadrant)
> >
> > (2)Pglookout (aiven)
>
> Both solutions use a simple and naive implementation, which makes them
> easy to
> use and admin. However, it gives the responsibilities to the admin to deal
> with
> fencing, which is a mandatory piece in almost all kind of DB cluster if you
> want to cover most of the failure cases and avoid split brain.
>
> So yes, they are simple, because complexity is left to the admin skills. It
> kind of require you rewrote and test yourself part of the fencing stack of
> Pacemaker. Good luck.
>
> And I'm not speaking about watchdog here, which I just fail to imagine how
> the
> admin could implement it himself.
>
> Just consider how messy it is to deal with "logical fencing" when
> considering
> doing it with pgbouncer.
>
> In short: if you are ready to spend many dev/admin hours to build a safe HA
> cluster for your DB and set strict requirements, those are fine.
>
> > About PAF, the product is hard to install and set up . It need a linux
> > cluster and a system engineers team to use it.
>
> Indeed, Pacemaker has a steep learning cuve and documentation still
> requires
> some improvement. But HA is not an easy subject. Just look at RHEL or Suse
> requirements before their team accept to support your DB cluster (spoiler:
> fencing).
>
> Whatever solution you pick, you must **know and document** its limitations
> and
> risks.
>


Re: increasing HA

2018-09-05 Thread Thomas Poty
> In fact, PAF does not support slots. So it is not a good candidate if
slot are
> a requirement.
Effectively, slots are a requirement we prefer to keep.

> > a proxy HAproxy and
> > for fencincg, i am a bit disappointed, i don't know what to do/use

> Depend on your hardware or your virtualization technology.
Our production cluster (master and slaves) runs in LXC containers. Each LXC
container runs on an HPE blade server. The storage is on a 3PAR SAN array.
Any advice?

> > How about you, do you have any preference about tools/solutions to use ?

> If you want a simple and well community adopted solution, pick Patroni.
It deals
> with slots, rely on etcd or zookeeper, fit nicely with haproxy, deal with
> watchdog to keep itself under monitor. However, it lacks of fencing and
its
> callback are asynchronous. You would have to take special care of your
> network and master connectivity upon primary failure.

I am looking for some information about this solution in its docs/IRC...
Your opinion about it matters to me, knowing that you maintain PAF :-)

> If you want something able to keep multiple services avaliable
(PostgreSQL, vIP,
> storage, pgBouncer, apache, whatever...), deal with dependencies,
locations,
> constraints, rules etc, pick Pacemaker (and a larger coffee machine). I
would
> (obviously) recommend PAF as resource agent for PgSQL, but you would have
to
> design your cluster without slots :/

many thanks

Le mer. 5 sept. 2018 à 14:15, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> a écrit :

> On Wed, 5 Sep 2018 13:23:41 +0200
> Thomas Poty  wrote:
>
> > Hi Jehan-Guillaume,
>
> Hello,
>
> > Thanks for your opinion.
> >
> > At first glance, i may use for automatic failover PAF,
>
> In fact, PAF does not support slots. So it is not a good candidate if slot
> are
> a requirement.
>
> > a proxy HAproxy and
> > for fencincg, i am a bit disappointed, i don't know what to do/use
>
> Depend on your hardware or your virtualization technology.
>
> > How about you, do you have any preference about tools/solutions to use ?
>
> If you want a simple and well community adopted solution, pick Patroni. It
> deals
> with slots, rely on etcd or zookeeper, fit nicely with haproxy, deal with
> watchdog to keep itself under monitor. However, it lacks of fencing and its
> callback are asynchronous. You would have to take special care of your
> network and master connectivity upon primary failure.
>
> If you want something able to keep multiple services avaliable
> (PostgreSQL, vIP,
> storage, pgBouncer, apache, whatever...), deal with dependencies,
> locations,
> constraints, rules etc, pick Pacemaker (and a larger coffee machine). I
> would
> (obviously) recommend PAF as resource agent for PgSQL, but you would have
> to
> design your cluster without slots :/
>
> ++
>


Re: increasing HA

2018-09-05 Thread Thomas Poty
> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways are working with Pacemaker:
>   * the pgsql resource agent (see FAQ of PAF)
>   * a shared disk architecture (no pgsql replication)
Probably I will go with the "patroni-etcd-haproxy" solution you
suggested.

> I suppose you could find fencing agents for:

> * the blade itself, but it would fence all the container running on it
> * the access to the SAN from the failing container

> I don't know if fencing agent exists for a container itself. Note that
I'm not
> familiar with the container world, I lack a lot of knowledge on this
> technology.

I am not familiar with this either. I heard of it for the first time a few
days ago while reading about how to improve HA... :-)

Many thanks for your opinions/advices

Le mer. 5 sept. 2018 à 15:44, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> a écrit :

> On Wed, 5 Sep 2018 15:06:21 +0200
> Thomas Poty  wrote:
>
> > > In fact, PAF does not support slots. So it is not a good candidate if
> > > slot are a requirement.
> > Effectively slots are a requirement we prefer to keep
>
> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways are working with Pacemaker:
>   * the pgsql resource agent (see FAQ of PAF)
>   * a shared disk architecture (no pgsql replication)
>
> > > > a proxy HAproxy and
> > > > for fencincg, i am a bit disappointed, i don't know what to do/use
> > > Depend on your hardware or your virtualization technology.
> > Our production cluster (master and slave) runs on LXC container. Each LXC
> > container runs on a HPE Blade Server. The storage is on a SAN 3PAR array.
> > Any advice ?
>
> I suppose you could find fencing agents for:
>
> * the blade itself, but it would fence all the container running on it
> * the access to the SAN from the failing container
>
> I don't know if fencing agent exists for a container itself. Note that I'm
> not
> familiar with the container world, I lack a lot of knowledge on this
> technology.
>
> ++
>


Re: increasing HA

2018-09-05 Thread Thomas Poty
> How many nodes do you want run? Keep in mind that with only 2 nodes,
fencing off the right one is by definition an undecidable problem. That's
one of the features of the current linux HA stack that makes one want to go
BSD/CARP.
>
> And if you want to run fully distributed, you might want to look at redis
in the first place.
We have one master and 2 slaves, so 3 nodes.

Le mer. 5 sept. 2018 à 16:42, Dmitri Maziuk  a
écrit :

> On Wed, 5 Sep 2018 13:23:41 +0200
> Thomas Poty  wrote:
>
> > At first glance, i may use for automatic failover PAF, a proxy HAproxy
> and
> > for fencincg, i am a bit disappointed, i don't know what to do/use
>
> How many nodes do you want run? Keep in mind that with only 2 nodes,
> fencing off the right one is by definition an undecidable problem. That's
> one of the features of the current linux HA stack that makes one want to go
> BSD/CARP.
>
> And if you want to run fully distributed, you might want to look at redis
> in the first place.
>
> --
> Dmitri Maziuk 
>
>


Re: increasing HA

2018-09-05 Thread Thomas Poty
I want to get:
- automatic failover (promoting a slave)
- automatic routing of traffic to the master
- fencing in case of node failure

I already have 2 asynchronous slaves in hot standby mode + replication
slots. I don't want to add a new node.



Le mer. 5 sept. 2018 à 18:39, Dmitri Maziuk  a
écrit :

> On Wed, 5 Sep 2018 17:45:05 +0200
> Thomas Poty  wrote:
>
> > We have one master and 2 slaves so 3 nodes
>
> So what is the "HA" that you're trying to "increase"? Are you adding a 3rd
> slave? A 2nd master? A hot standby? All of the above?
>
> --
> Dmitri Maziuk 
>
>


Re: Idle connections / sessions

2018-12-12 Thread Thomas Poty
Hi Josef,

pg_terminator may help you.

thomas

Le mer. 12 déc. 2018 à 11:02, Fabio Pardi  a écrit :

> Hi Josef,
>
> please avoid cross posting to multiple lists.
>
> I m not a developer, but I think that if you do not want idle connections,
> you should terminate them on the side they have been created.
>
> If your application leaves the connection open, then you will notice idle
> connections on Postgres when not in use.
>
> regards,
>
> fabio pardi
>
>
> On 12/12/2018 10:37, Oygun Josef wrote:
>
> Hi,
>
>
>
> Is it possible to terminate idle connections/sessions automatically
> through a timeout in AWS or do I need to run a periodical cron job for this?
>
>
>
> Postgres version: 9.6.6
>
> Instance: db.t2.micro
>
> RAM : 1GB
>
>
>
> We are running a microservice architecture using docker with kubernetes
> and I can see that every pod on every node that has connected to the DB
> still has a idle connection as long as the node is still active even.
>
>
>
> It is both PGAdmin and the PostgreSQL JDBC Driver that leaves open idle
> connections.
>
>
>
>
>
> *Josef Oygun*
>
>
>
>
>


Re: Idle connections / sessions

2018-12-12 Thread Thomas Poty
Sorry, but I don't know (I am not familiar with AWS).
Maybe this will help: pg_terminator can run on the PostgreSQL server
itself, if you want.

Regards
Thomas

Le mer. 12 déc. 2018 à 12:20, Oygun Josef  a écrit :

> Hi,
>
>
>
> Sorry for that!
>
>
>
> Thank you for the answers, this is good but do you know if there is a way
> do to it through AWS console with some kind of configuration instead?
>
>
>
> Reason for that is the we use terraform scripts to create and keep state
> of our instances so pg_terminator would require me to add every new db
> props to it.
>
>
>
> /Josef
>
>
>
>
>
> *Från:* Thomas Poty [mailto:thomas.p...@gmail.com]
> *Skickat:* den 12 december 2018 11:11
> *Till:* pgsql-general@lists.postgresql.org
> *Ämne:* Re: Idle connections / sessions
>
>
>
> Hi Josef,
>
>
>
> pg_terminator may help you.
>
>
>
> thomas
>
>
>
> Le mer. 12 déc. 2018 à 11:02, Fabio Pardi  a écrit :
>
> Hi Josef,
>
> please avoid cross posting to multiple lists.
>
> I m not a developer, but I think that if you do not want idle connections,
> you should terminate them on the side they have been created.
>
> If your application leaves the connection open, then you will notice idle
> connections on Postgres when not in use.
>
> regards,
>
> fabio pardi
>
> On 12/12/2018 10:37, Oygun Josef wrote:
>
> Hi,
>
>
>
> Is it possible to terminate idle connections/sessions automatically
> through a timeout in AWS or do I need to run a periodical cron job for this?
>
>
>
> Postgres version: 9.6.6
>
> Instance: db.t2.micro
>
> RAM : 1GB
>
>
>
> We are running a microservice architecture using docker with kubernetes
> and I can see that every pod on every node that has connected to the DB
> still has a idle connection as long as the node is still active even.
>
>
>
> It is both PGAdmin and the PostgreSQL JDBC Driver that leaves open idle
> connections.
>
>
>
>
>
> *Josef Oygun*
>
>
>
>
>
>


number and type of locks for an alter sequence

2019-01-28 Thread Thomas Poty
Hello,

In PG10, when I am altering a sequence, I see 2 locks in pg_locks matching
my sequence:
a RowExclusiveLock
and a ShareRowExclusiveLock.

My question is: why 2 locks?
Someone on IRC said it was for the catalog, but I am not sure, because
pg_locks.relation is the same for the 2 locks.

At the same time, if I try to call nextval from another session, that
session wants to get a RowExclusiveLock on the sequence.

So, to me, the ShareRowExclusiveLock is taken while the sequence is being
altered, and the RowExclusiveLock is for concurrent access between
alter/drop sequence and nextval/setval/currval/lastval.

The documentation does not seem to mention this (
https://www.postgresql.org/docs/10/explicit-locking.html). Maybe I should
open a ticket?
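A minimal way to reproduce the observation (the sequence name is a placeholder):

```sql
-- session 1: keep an ALTER SEQUENCE open inside a transaction
BEGIN;
ALTER SEQUENCE my_seq RESTART WITH 100;
-- leave the transaction open for now

-- session 2: list the locks held on the sequence
SELECT locktype, mode, granted, pid
FROM pg_locks
WHERE relation = 'my_seq'::regclass;
-- a nextval('my_seq') here blocks until session 1 commits, since
-- RowExclusiveLock conflicts with ShareRowExclusiveLock
```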

Thanks


Major upgrade from 9.6.10 to 10.6 and pg_stat_statement update

2019-01-30 Thread Thomas Poty
Hi,

I am about to upgrade from 9.6.10 to 10.6.
I read, in the release notes of 9.6.11 and of 10.6, that an "ALTER
EXTENSION pg_stat_statements UPDATE" must be done.

So, after my upgrade, must this update of pg_stat_statements still be done?
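For reference, the update itself is a single statement, run in each database where the extension is installed (a sketch):

```sql
-- after the binary upgrade, refresh the extension's SQL objects
ALTER EXTENSION pg_stat_statements UPDATE;

-- then check the installed version
SELECT extname, extversion FROM pg_extension
WHERE extname = 'pg_stat_statements';
```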

Thanks

Thomas


Re: Auto close idle connections for specific user (not by pg_cancel command)

2019-02-05 Thread Thomas Poty
Hi,
Maybe pgterminator could help you
https://github.com/trustly/pgterminator
Regards
Thomas

Le mar. 5 févr. 2019 à 18:52, Pavel Stehule  a
écrit :

> Hi
>
>
> út 5. 2. 2019 v 18:47 odesílatel Emi  napsal:
>
>> Hello,
>>
>> In psql, may I know is there a way/config that idle connections could be
>> auto closed (but not by pg_cancel_backend command) for specific user(s)
>> please?
>>
>> For example,
>> select  usename, waiting, query_start  from pg_stat_activity
>> where usename like 'connUser1' and
>>current_query='';
>>
>> All connections from connUser1, if idle more than 1 day, psql auto close
>> them.
>>
>
> What I know, there is not any timeout for this purpose.
>
> Maybe pgbouncer can do this https://pgbouncer.github.io/config.html
>
> client_idle_timeout
>
> Regards
>
> Pavel
>
>>
>> Thanks a lot.
>>
>


Re: Auto close idle connections for specific user (not by pg_cancel command)

2019-02-05 Thread Thomas Poty
Or you could use the idle_in_transaction_session_timeout parameter:

https://www.cybertec-postgresql.com/en/idle_in_transaction_session_timeout-terminating-idle-transactions-in-postgresql/

And use ALTER ROLE ... IN DATABASE ... SET
idle_in_transaction_session_timeout to scope it to a specific user:

https://www.postgresql.org/docs/10/sql-alterrole.html
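A sketch of scoping it to one user — role and database names are hypothetical, and note that this only terminates sessions that are idle *inside an open transaction*, not plain idle connections:

```sql
-- kill connUser1's sessions left idle in a transaction for more than a day
ALTER ROLE connuser1 IN DATABASE mydb
    SET idle_in_transaction_session_timeout = '1d';
```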

Regards

Le mar. 5 févr. 2019 à 18:59, Thomas Poty  a écrit :

> Hi,
> Maybe pgterminator could help you
> https://github.com/trustly/pgterminator
> Regards
> Thomas
>
> Le mar. 5 févr. 2019 à 18:52, Pavel Stehule  a
> écrit :
>
>> Hi
>>
>>
>> út 5. 2. 2019 v 18:47 odesílatel Emi  napsal:
>>
>>> Hello,
>>>
>>> In psql, may I know is there a way/config that idle connections could be
>>> auto closed (but not by pg_cancel_backend command) for specific user(s)
>>> please?
>>>
>>> For example,
>>> select  usename, waiting, query_start  from pg_stat_activity
>>> where usename like 'connUser1' and
>>>current_query='';
>>>
>>> All connections from connUser1, if idle more than 1 day, psql auto close
>>> them.
>>>
>>
>> What I know, there is not any timeout for this purpose.
>>
>> Maybe pgbouncer can do this https://pgbouncer.github.io/config.html
>>
>> client_idle_timeout
>>
>> Regards
>>
>> Pavel
>>
>>>
>>> Thanks a lot.
>>>
>>


Re: Auto close idle connections for specific user (not by pg_cancel command)

2019-02-05 Thread Thomas Poty
You are right. So it is not suitable for your case. Sorry

Le mar. 5 févr. 2019 à 19:27, Ravi Krishna  a écrit :

> >
> >
> https://www.cybertec-postgresql.com/en/idle_in_transaction_session_timeout-terminating-idle-transactions-in-postgresql/
> >
> > And use alter role in database set idle_in_transaction_session_timeout
> to...
> >
> > https://www.postgresql.org/docs/10/sql-alterrole.html
>
> But this works only for those sessions which has started a transaction and
> not committed within the session_timeout period, right ?


Array of foreign key

2017-12-22 Thread Thomas Poty
Hello all,

I use PostgreSQL 9.5 and I am looking for a way to have a column which is
an array of foreign keys. If what I read is correct, it is currently not
possible... Do you have any idea how to implement this feature in a safe
way (with foreign key constraints enforced)?
Otherwise, the only idea I have is to use an enum array.

Thank you

Thomas


Re: Array of foreign key

2017-12-22 Thread Thomas Poty
Actually, we are migrating from MySQL to PostgreSQL and we have to replace
MySQL's SET columns. The migration will be done in 2 steps: first, in a
like-for-like way; the second step will make use of PostgreSQL's power.

I think the most like-for-like way to migrate SET columns is using an enum
array.

Thank you Andreas.

Thomas


Le 22 déc. 2017 20:22, "Andreas Kretschmer"  a
écrit :

> On 22 December 2017 20:02:43 CET, Thomas Poty 
> wrote:
> >Hello all,
> >
> >I use postgresql 9.5 and I am looking for way to have a column which is
> >an
> >array of a foreign key. If what I read is correct it is currently not
> >possible... Have you any idea how to implement this feature in a safe
> >way
>
> Normalisation?
>
> Can you explain that a bit more, what's the use-case?
>
>
> Regards, Andreas
>
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
>


Re: Array of foreign key

2017-12-23 Thread Thomas Poty
Good evening all,
I have just seen all the messages. Thanks for that.

First, Peter Holzer has understood my need exactly.
I am a bit disappointed this feature is not implemented. It would be great.

Then, I know an enum is probably not the best choice, but it is historic in
our database. As I said previously, we are working on the first of a
two-step migration and will probably not use enums after the second phase.
You also have to know that making these changes requires a lot of testing
to be sure the results are correct and as expected! For our company it is
not so easy...

After that, we are running on PostgreSQL 9.5 and CentOS 7.x.

Finally, keep in mind we are here to help each other.

Thanks for your help

Thomas


Le 23 déc. 2017 20:25, "Peter J. Holzer"  a écrit :

On 2017-12-23 12:24:07 -0500, Melvin Davidson wrote:
> On Sat, Dec 23, 2017 at 12:09 PM, Peter J. Holzer 
wrote:
> >...Is there a way to
> >enforce foreign key constraints on the members of an array?
> >At insert time you can check with a trigger of course, and maybe there
> >is a way to do it in a check constraint
>
> I don't think you understand how Foreign Key constraints work in
PostgreSQL.
> PostgreSQL will prevent any insert where the value of a column is not
within
> the FK table.
> So you DO NOT need a check constraint or trigger.
>
> What I do not understand is your reference to a FK "array".

If you do not understand something, please ask. Don't claim that other
people "don't understand how X works" just because you don't know what
they are talking about. Also, please pay a bit of attention who you
are replying to. I am not the OP. I just understand what he wants (or at
least I think I do).

> So for the sake of CLARIDICATION, would you please
> 1, State the version of PostgreSQL
> 2. State the O/S

Why should I? You didn't state the OS and PostgreSQL version you use
either. And I don't think you should, as it is irrelevant for the
discussion.

> 3. Provide an example of an FK "array" that you are concerned with.

I think the OP wants something like this:

create table features (
id serial primary key,
name varchar not null
);

create table products (
id serial primary key,
name varchar not null,
array_of_features int[]
references [] features(id) -- XXX - invented syntax
);

where products.array_of_features can only contain integers from
features.id. This doesn't currently (as of 10.1) work.
As I wrote, you could use triggers, but there is no way to declare that
foreign key directly.
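A sketch of such a trigger, using the invented schema above — with the usual caveat that, unlike a real foreign key, it does nothing when rows are later deleted from features:

```sql
CREATE OR REPLACE FUNCTION check_array_of_features() RETURNS trigger AS $$
BEGIN
    -- reject any array element that has no matching features.id
    IF EXISTS (
        SELECT 1
        FROM unnest(NEW.array_of_features) AS elem(id)
        LEFT JOIN features f ON f.id = elem.id
        WHERE f.id IS NULL
    ) THEN
        RAISE EXCEPTION 'array_of_features contains an unknown feature id';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_check_features
    BEFORE INSERT OR UPDATE OF array_of_features ON products
    FOR EACH ROW EXECUTE PROCEDURE check_array_of_features();
```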

If you want foreign keys, you have to use an intersection table:

create table features (
id serial primary key,
name varchar not null
);

create table products (
id serial primary key,
name varchar not null
);

create table product_features (
product references products(id),
feature references features(id)
);

But that has a relatively high overhead both in storage and for queries.

I can understand why the OP wants that. I could have used something like
this in the past, too.

hp

--
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 


Re: Sv: Re: Sv: Re: Sv: Re: Sv: Re: data-checksums

2018-01-10 Thread Thomas Poty
Hello,
One question seems important to me:
how is a corruption that was detected thanks to data-checksums fixed?

Thank you,
Thomas

Le 10 janv. 2018 20:39, "Andres Freund"  a écrit :

> On 2018-01-09 20:51:17 -0500, Stephen Frost wrote:
> > Greetings,
> >
> > * Andreas Joseph Krogh (andr...@visena.com) wrote:
> > > Aha, so enabling CRC causes hint-bits to be written causing extra
> WAL-logging,
> > > which woudn't be the case without CRC enabled?
> > > Thanks for pointing that out.
> >
> > Yes, having checksums enabled forces logging of hint bits.  You can
> > enable wal_log_hints independently too, without having checksums, to see
> > what kind of an impact it'll have on your environment.
> >
> > A useful documentation update might be:
> >
> > ---
> > With checksums enabled, wal_log_hints 
> > will be enabled and each page read or write will involve calculating the
> > checksum for the page.
> > ---
> >
> > I'd probably just replace the "Enabling checksums may incur a noticeable
> > performance penalty" with the above, as it should be clear that doing
> > more work implies an impact on performance and that avoids the whole
> > question of trying to characterize in a general way something that can't
> > be generalized (as it's workload dependent).
>
> -1. I think this is underplaying the cost.
>
> Greetings,
>
> Andres Freund
>
>


weird result by changing type enum array to text array of a column

2018-02-05 Thread Thomas Poty
Hello,

I am running 9.5.10 on CentOS 7.

I have a column "TestFields"; its type is an array of an enum.

I would like to change the type of this column to an array of text.

So I execute -> alter table "controldetailpromoters" alter column
"TestFields" type text[]
The command seems to be correctly executed,
but when I execute a "\d" of my table, the column "TestFields" still shows
as an array of the enum.


Do you have any idea of what is going wrong?

Thank you.
Thomas


Re: weird result by changing type enum array to text array of a column

2018-02-05 Thread Thomas Poty
Hello Rob,

I had already read this.

I have already executed this kind of procedure, just not with an array.

I don't know why, but now it is working :-s

Thank you for your reply :-)

Thomas
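For anyone hitting the same symptom, a minimal sketch (throwaway names, not the real table) of converting an enum-array column to text[] with an explicit cast:

```sql
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
CREATE TABLE demo ("TestFields" mood[]);
INSERT INTO demo VALUES (ARRAY['sad', 'happy']::mood[]);

-- the USING clause makes the element-by-element conversion explicit
ALTER TABLE demo ALTER COLUMN "TestFields" TYPE text[]
    USING "TestFields"::text[];
-- \d demo should now report "TestFields" as text[]
```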


2018-02-05 13:16 GMT+01:00 rob stone :

>
>
> On Mon, 2018-02-05 at 11:26 +0100, Thomas Poty wrote:
> > Hello,
> >
> > I m running 9.5.10 on centos 7.
> >
> > I have a colmun "TestFields", its type is an array of an enum.
> >
> > I would like to change the type of this column by an array of text.
> >
> > So i execute  -> alter table "controldetailpromoters" alter column
> > "TestFields" type text[]
> > The command seems to be correctly executed
> >  but  when I execute a "\d" of my table, the column "TestFields" is
> > always an array of the enum
> >
> >
> > Do you have any idea of what is going wrong?
> >
> > Thank you.
> > Thomas
>
>
> Hello Thomas,
>
> See chapter 8.7.
>
> Your array "TestFields" contains the internal enum references that are
> converted to the defined values by cross referencing to the catalogue
> table pg_enum.
> I think you will have to add another column to the table as an array of
> text, write a little program to cross reference the TestFields array to
> pg_enum and update the new column with the textual representation of
> the "enum's", then alter your app accordingly.
>
> HTH,
> Rob
>