Re: adding more space to the existing 9.6 cluster

2019-02-21 Thread Thomas Boussekey
Hello all,

If I were in your situation, I would look at whether only part of the 36
databases needs to move to the new disk.
* Either move some of the databases to the new disk,
* Or, for the largest databases, consider working with a multi-tablespace
configuration, using the command ALTER TABLE <> SET
TABLESPACE <>; (documentation:
https://www.postgresql.org/docs/9.6/sql-altertable.html) to move some
tables to the new disk. Depending on your disk and DB configurations, you
can decide whether it is better to move the very large tables or the most
intensively used ones; a rough sketch follows below.
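
To make that concrete, a rough sketch only (the database and table names
below are placeholders, and tablespace01 is taken from your own example):

# Find the biggest tables in one database, then move a chosen one to the new disk.
psql -d my_database -c "
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;"

# ALTER TABLE ... SET TABLESPACE takes an exclusive lock and rewrites the table,
# so schedule it for a quiet period or an outage window.
psql -d my_database -c "ALTER TABLE my_big_table SET TABLESPACE tablespace01;"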

I hope I'm clear enough!
Feel free to ask for clarification or to add more details so we can go further!

Hope this helps,
Have a nice day,
Thomas

On Wed, Feb 20, 2019 at 21:37, Ron wrote:

> On 2/19/19 5:02 PM, Julie Nishimura wrote:
>
> Hello, we are almost out of space on our main data directory, and about to
> introduce a new directory to our cluster. We cannot use multiple physical
> disks as a single volume, so we are thinking about creating a new tablespace.
> Our current data_directory shows as follows:
> /data/postgresql/9.6/main
> postgres=# SELECT spcname FROM pg_tablespace;
>   spcname
> 
>  pg_default
>  pg_global
> (2 rows)
>
> We also have 36 existing databases on this cluster.
> If we add a new directory, will it be enough to execute the following
> commands in order to force new data there:
>
> CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';
>
> ALTER DATABASE db_name SET TABLESPACE tablespace01
>
> Do I need to repeat it for all our existing databases?
>
>
> Since the command is ALTER DATABASE, it seems that yes, you have to do it
> for all of them.  A simple bash script should knock that out quickly.
>
>
> Should I change our "template*" dbs as well?
>
>
> If you want *new* databases to automatically go to tablespace01 then
> alter template1.
>
> Do I need to do something else?
>
>
> Maybe, depending on the size of your databases, and how much down time you
> can afford,
>
>
> https://www.postgresql.org/docs/9.6/sql-alterdatabase.html "This command
> physically moves any tables or indexes in the database's old default
> tablespace to the new tablespace."
>
> For example, our multi-TB databases are so big that moving it all at once
> is unreasonably slow.  And a failure might leave the db in a bad spot.
> Thus, I'd move one table at a time, a few per outage.
>
> Naturally, YMMV.
>
> Thank you for your advice.
>
>
>
>
> --
> Angular momentum makes the world go 'round.
>


Re: adding more space to the existing 9.6 cluster

2019-02-21 Thread Julie Nishimura
Thank you for the suggestions! We realized we cannot add more space to the
existing cluster due to hardware limitations, so we decided to go the other
route: introducing a new standby on a new host with a bigger data volume
(with pg_basebackup and putting the master into archive mode), then
promoting it to the master. Do you have any idea how long it might take to
run pg_basebackup for a 21 TB database? Are there any gotchas we should be
aware of? Thank you for your support and help
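
A minimal sketch of the kind of pg_basebackup invocation being considered
(host, user, and target directory are placeholders). As a rough order of
magnitude only: at a sustained ~200 MB/s, 21 TB is about 30 hours of copying
before the new standby even starts replaying WAL, so size the WAL archive
retention accordingly.

# -X stream keeps WAL flowing during the copy, -c fast forces an immediate
# checkpoint, -P shows progress, -R writes a starter recovery.conf.
pg_basebackup -h old-master.example.com -U replication_user \
    -D /new/big/disk/postgresql/9.6/main \
    -X stream -c fast -P -R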

Sent from my iPhone

On Feb 21, 2019, at 12:18 AM, Thomas Boussekey
<thomas.bousse...@gmail.com> wrote:

Hello all,

If I were in your situation, I would look at whether only part of the 36
databases needs to move to the new disk.
* Either move some of the databases to the new disk,
* Or, for the largest databases, consider working with a multi-tablespace
configuration, using the command ALTER TABLE <> SET TABLESPACE <>;
(documentation: https://www.postgresql.org/docs/9.6/sql-altertable.html) to
move some tables to the new disk. Depending on your disk and DB configurations,
you can decide whether it is better to move the very large tables or the most
intensively used ones.

I hope I'm clear enough!
Feel free to ask for clarification or to add more details so we can go further!

Hope this helps,
Have a nice day,
Thomas

On Wed, Feb 20, 2019 at 21:37, Ron <ronljohnso...@gmail.com> wrote:
On 2/19/19 5:02 PM, Julie Nishimura wrote:
Hello, we are almost out of space on our main data directory, and about to
introduce a new directory to our cluster. We cannot use multiple physical
disks as a single volume, so we are thinking about creating a new tablespace.
Our current data_directory shows as follows:
/data/postgresql/9.6/main
postgres=# SELECT spcname FROM pg_tablespace;
  spcname

 pg_default
 pg_global
(2 rows)

We also have 36 existing databases on this cluster.
If we add a new directory, will it be enough to execute the following commands
in order to force new data there:

CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';


ALTER DATABASE db_name SET TABLESPACE tablespace01

Do I need to repeat it for all our existing databases?

Since the command is ALTER DATABASE, it seems that yes, you have to do it for
all of them.  A simple bash script should knock that out quickly.
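
A minimal sketch of such a loop (assuming the new tablespace is named
tablespace01 and a superuser can connect to the postgres database):

# ALTER DATABASE ... SET TABLESPACE must be run from a different database,
# and it fails if anyone else is connected to the database being moved.
for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'"); do
    echo "Moving $db ..."
    psql -d postgres -c "ALTER DATABASE \"$db\" SET TABLESPACE tablespace01;"
done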



Should I change our "template*" dbs as well?


If you want new databases to automatically go to tablespace01 then alter 
template1.


Do I need to do something else?


Maybe, depending on the size of your databases, and how much down time you can 
afford,


https://www.postgresql.org/docs/9.6/sql-alterdatabase.html "This command 
physically moves any tables or indexes in the database's old default tablespace 
to the new tablespace."

For example, our multi-TB databases are so big that moving it all at once is 
unreasonably slow.  And a failure might leave the db in a bad spot.  Thus, I'd
move one table at a time, a few per outage.

Naturally, YMMV.


Thank you for your advice.




--
Angular momentum makes the world go 'round.


Re: Barman disaster recovery solution

2019-02-21 Thread Andreas Kretschmer




On 21.02.19 at 08:17, Julie Nishimura wrote:

Does anyone use this solution? Any recommendations?

Thanks!


Sure, many of our customers do. Why not?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Running from 9.6 backups sometimes fails with fatal error

2019-02-21 Thread Sergey Burladyan
Hi!

I need help investigating what happened here. I have two different master
servers, each with a standby,
version: PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg90+1),
compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit

and I create backups with pg_basebackup from their standbys, but sometimes the
backup check fails: it just cannot start. For example:

server 1:
2019-02-18 14:18:51 MSK [11901-2] LOG:  checkpoint starting: end-of-recovery 
immediate wait
2019-02-18 14:18:51 MSK [11901-3] DEBUG:  performing replication slot checkpoint
2019-02-18 14:18:51 MSK [11901-4] LOG:  request to flush past end of generated 
WAL; request 44B/7E5DAB28, currpos 44B/7305B560
2019-02-18 14:18:51 MSK [11901-5] CONTEXT:  writing block 0 of relation 
base/16506/16891_vm
2019-02-18 14:18:51 MSK [11901-6] ERROR:  xlog flush request 44B/7E5DAB28 is 
not satisfied --- flushed only to 44B/7305B560
2019-02-18 14:18:51 MSK [11901-7] CONTEXT:  writing block 0 of relation 
base/16506/16891_vm
2019-02-18 14:18:51 MSK [11858-110] FATAL:  checkpoint request failed

server 2:
2019-02-07 16:07:40 MSK [7911-37] LOG:  checkpoint starting: end-of-recovery 
immediate wait
2019-02-07 16:07:40 MSK [7911-38] DEBUG:  performing replication slot checkpoint
2019-02-07 16:07:41 MSK [7911-39] LOG:  request to flush past end of generated 
WAL; request 1CB/8E94EE90, currpos 1CB/8A195210
2019-02-07 16:07:41 MSK [7911-40] CONTEXT:  writing block 0 of relation 
base/18413/18529_vm
2019-02-07 16:07:41 MSK [7911-41] ERROR:  xlog flush request 1CB/8E94EE90 is 
not satisfied --- flushed only to 1CB/8A195210
2019-02-07 16:07:41 MSK [7911-42] CONTEXT:  writing block 0 of relation 
base/18413/18529_vm
2019-02-07 16:07:41 MSK [7868-665] FATAL:  checkpoint request failed

backup checked with: PostgreSQL 9.6.11 on x86_64-pc-linux-gnu (Debian 
9.6.11-1.pgdg90+1), compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 
64-bit

recovery.conf:
restore_command = '/usr/local/bin/restore_cmd /mnt/wals %f %p'
recovery_target = 'immediate'

postgresql.conf:
wal_log_hints = on

How can I debug this?

-- 
Sergey Burladyan



Re: Barman disaster recovery solution

2019-02-21 Thread Edson Carlos Ericksson Richter



On 21/02/2019 04:17, Julie Nishimura wrote:

Does anyone use this solution? Any recommendations?

Thanks!



We do use it.

IMHO, these are the minimum recommendations:


1) start using it! It's easy and robust.

2) for minimal impact on the production servers, set up replicated servers
and create your backups from the slave servers.


3) *test your backups*. This is a MUST HAVE; it is not optional.

4) have your backup server in a different city, or state, or even
country. Never, ever create a backup on a server sitting right next to your
production server.


5) only communicate with your servers using SSH and private key 
certificates. Establish a PKI infrastructure in a way that production 
and backup servers only communicate using SSH and certificates.


6) your backup servers should never, ever be connected directly to the
internet. Hackers love low-attention backup servers running with minimal
security.



No backup solution (no matter which one you choose) is 100% guaranteed:
your disks may fail, your network may fail, your memory may fail, files
get corrupted - so, set up a regular "restore" to a separate "test backup
server" on a daily basis. Having a virtual server for this purpose has
minimal budget impact, if any at all, and you save your sanity in case of
a disaster.
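
A rough sketch of what that routine can look like with Barman (the server
name pg-prod and the restore path are assumptions):

barman check pg-prod        # SSH, WAL archiving and retention all healthy?
barman backup pg-prod       # take a new base backup
barman list-backup pg-prod  # see which backups are available
# Rehearse a restore of the most recent backup onto the test machine:
barman recover pg-prod latest /var/lib/postgresql/restore-test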




Regards,



Edson






Re: Barman disaster recovery solution

2019-02-21 Thread Stephen Frost
Greetings,

* Edson Carlos Ericksson Richter (rich...@simkorp.com.br) wrote:
> No backup solution (no matter which one you choose) is 100% guaranteed: your
> disks may fail, your network may fail, your memory may fail, files get
> corrupted - so, set up a regular "restore" to a separate "test backup server"
> on a daily basis. Having a virtual server for this purpose has minimal budget
> impact if any at all, and you save your sanity in case of a disaster.

While performing a straight restore is definitely good, to deal with the
in-place corruption risk of whatever your backup repository is, you really
need to do more than that.  If the middle of some index gets corrupted in
the backup, you may not notice it on the restore and even with casual use
of the restored server, which is why robust backup software really should
have a manifest of all the files in the backup and their checksums and
that should be checked on a restore.

One alternative, if your backup solution doesn't handle this for you,
and if you have page-level checksums enabled for your PG cluster (which I
strongly recommend...) would be to perform the complete restore and then
run pg_verify_checksums (or pg_checksums, depending on version) on the
restored cluster (note that you should bring the cluster up and let WAL
replay go through to at least reach consistency too...), which will
hopefully pick up on and report any latent corruption.
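
A rough sketch of that verification pass (paths are placeholders;
pg_verify_checksums is the v11 name, pg_checksums from v12 on, and it
requires a cleanly shut-down cluster):

# After restoring the backup into a scratch data directory:
pg_ctl -w -D /restore-test/data start        # let WAL replay reach consistency
pg_ctl -w -D /restore-test/data stop -m fast
pg_verify_checksums -D /restore-test/data    # reports any block whose checksum does not match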

Note that doing something like a simple 'pg_dump' on the restored
cluster won't check the page-level checksums in indexes (or check indexes
at all), though that would provide you with a logical export of the data
that should be able to be imported into a new cluster (assuming you keep
the results of the pg_dump, that is..).

Thanks!

Stephen




Re: adding more space to the existing 9.6 cluster

2019-02-21 Thread Ron

No doubt it'll take a while...

You said you have 36 databases.  Could you move half of them using 
pg_dump/pg_restore over a few outage windows?  (Doing it in bite-sized 
pieces reduces risk.)
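
Something along these lines per database, one or two per outage window
(hosts, paths, and the job count below are assumptions):

# Dump one database in directory format with 8 parallel jobs...
pg_dump -h old-host -Fd -j 8 -f /backup/mydb.dir mydb
# ...then create it on the new, larger host and restore in parallel.
createdb -h new-host mydb
pg_restore -h new-host -d mydb -j 8 /backup/mydb.dir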


On 2/21/19 2:27 AM, Julie Nishimura wrote:
Thank you for the suggestions! We realized we cannot add more space to the
existing cluster due to hardware limitations, so we decided to go the other
route: introducing a new standby on a new host with a bigger data volume
(with pg_basebackup and putting the master into archive mode), then
promoting it to the master. Do you have any idea how long it might take to
run pg_basebackup for a 21 TB database? Are there any gotchas we should be
aware of? Thank you for your support and help


Sent from my iPhone

On Feb 21, 2019, at 12:18 AM, Thomas Boussekey wrote:



Hello all,

If I were in your situation, I would look at whether only part of the 36
databases needs to move to the new disk.

* Either move some of the databases to the new disk,
* Or, for the largest databases, consider working with a multi-tablespace
configuration, using the command ALTER TABLE <> SET TABLESPACE <>;
(documentation: https://www.postgresql.org/docs/9.6/sql-altertable.html) to
move some tables to the new disk. Depending on your disk and DB
configurations, you can decide whether it is better to move the very large
tables or the most intensively used ones.


I hope I'm clear enough!
Feel free to ask for clarification or to add more details so we can go further!

Hope this helps,
Have a nice day,
Thomas

On Wed, Feb 20, 2019 at 21:37, Ron wrote:


On 2/19/19 5:02 PM, Julie Nishimura wrote:

Hello, we are almost out of space on our main data directory, and
about to introduce a new directory to our cluster. We cannot use
multiple physical disks as a single volume, so we are thinking about
creating a new tablespace.
Our current data_directory shows as follows:
/data/postgresql/9.6/main
postgres=# SELECT spcname FROM pg_tablespace;
  spcname

 pg_default
 pg_global
(2 rows)

We also have 36 existing databases on this cluster.
If we add a new directory, will it be enough to execute the following
commands in order to force new data there:
CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';
ALTER DATABASE db_name SET TABLESPACE tablespace01

Do I need to repeat it for all our existing databases?


Since the command is ALTER DATABASE, it seems that yes,
you have to do it for all of them.  A simple bash script should knock
that out quickly.



Should I change our "template*" dbs as well?


If you want *new* databases to automatically go to tablespace01 then
alter template1.


Do I need to do something else?


Maybe, depending on the size of your databases, and how much down
time you can afford,


https://www.postgresql.org/docs/9.6/sql-alterdatabase.html "This
command physically moves any tables or indexes in the database's old
default tablespace to the new tablespace."

For example, our multi-TB databases are so big that moving it all at
once is unreasonably slow.  And a failure might leave the db in a bad
spot.  Thus, I'd move one table at a time, a few per outage.

Naturally, YMMV.


Thank you for your advice.




-- 
Angular momentum makes the world go 'round.




--
Angular momentum makes the world go 'round.


Re: procedures and transactions

2019-02-21 Thread Peter Eisentraut
On 2019-02-20 17:45, Rob Nikander wrote:
>> On Feb 20, 2019, at 10:07 AM, Peter Eisentraut 
>>  wrote:
>>
>> You can run SET TRANSACTION ISOLATION LEVEL in a procedure.
> 
> I tried that before but I get this error:
> 
> create or replace procedure t_test(n integer)
> as $$
> begin

You need to commit or rollback the preceding transaction here.  Yeah I
know it's a bit weird.

> set transaction isolation level serializable;
> raise notice 'current isolation level: %', (select 
> current_setting('transaction_isolation'));
> raise notice 'current txid: %', (select txid_current());
> end;
> $$ language plpgsql;
> 
> mydb=# call t_test(1);
> ERROR:  SET TRANSACTION ISOLATION LEVEL must be called before any query
> CONTEXT:  SQL statement "SET transaction isolation level serializable"
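
An untested sketch of the adjustment described above, i.e. ending the
transaction that CALL started before setting the isolation level:

psql <<'SQL'
CREATE OR REPLACE PROCEDURE t_test(n integer)
LANGUAGE plpgsql
AS $$
BEGIN
    COMMIT;  -- end the transaction that CALL started
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- first statement of the new transaction
    RAISE NOTICE 'current isolation level: %', current_setting('transaction_isolation');
END;
$$;
CALL t_test(1);
SQL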

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Channel binding not supported using scram-sha-256 passwords

2019-02-21 Thread Peter Eisentraut
On 2019-02-21 05:47, Michael Paquier wrote:
>   if (conn->ssl_in_use)
> + {
> + /*
> +  * The server has offered SCRAM-SHA-256-PLUS, 
> which is only
> +  * supported by the client if a hash of the 
> peer certificate
> +  * can be created.
> +  */
> +#ifdef HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH
>   selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
> +#endif
> + }

Is that right?  Won't we then just select nothing if the macro is not
defined?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Copy entire schema A to a different schema B

2019-02-21 Thread Tiffany Thang
Thanks everyone. Unfortunately the schema rename would not work since the
source database will be our production system. We have not gone live yet
but the system is expected to be constantly used.

I have multiple tables that I need to export, ranging from 20GB to 60GB
each. The parallel option will not help for a single table but would be
beneficial if I have multiple tables to dump.

I'm thinking of maybe using what Adrian suggested with the -f option and
then modifying the file, or maybe using a common public schema everywhere on
the source and target databases. I would have to restrict who has access to
the public schema.
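
A sketch of how that plain-format route is often done (schema, host, and
file names are assumptions, and the sed rename is deliberately crude, so
review the result before loading it):

# Dump only the source schema as plain SQL...
pg_dump -h source-host -n testuser1 -f testuser1.sql testdb
# ...rename the schema (this rewrites every occurrence of the string!)...
sed 's/testuser1/testuser2/g' testuser1.sql > testuser2.sql
# ...and load it into the target database.
psql -h target-host -d targetdb -f testuser2.sql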

Thanks.

Tiff

On Wed, Feb 20, 2019 at 11:14 PM Jiří Fejfar  wrote:

> Hi Tiffany, Isn't it possible for you to do:
>
> 1) temporarily rename schema testuser1 to testuser2
> 2) dump that schema into binary format
> 3) rename schema testuser2 back to testuser1
> 4) restore the backup into the testuser2 schema on the other DB?
>
> Jura.
>
> On Wed, 20 Feb 2019 at 23:23, Tiffany Thang 
> wrote:
> >
> > Hi Adrian,
> > I managed to back up my table in parallel using -Fd but I'm back to my
> original issue where I could not restore the table to a different schema.
> >
> > For example,
> > I would like to backup testuser1.mytable and restore it to
> testuser2.mytable.
> >
> > pg_dump -U testuser1 -Fd -f c:\temp\testuser1 -j 8 -t mytable -h
> myserver testdb
> >
> > where mytable is in testuser1 schema
> >
> > The dump completed fine but when I attempted to restore the table using
> pg_restore to another database, it tried to create the table in testuser1
> schema. The restore failed since testuser1 schema does not exist in the
> target database. When I created a testuser1 schema in the target database,
> the restore worked fine. Since the dump toc is in binary format, I could
> not make the change to reflect the new target schema, testuser2.
> >
> > So, how should I go about restoring tables from one schema to a
> different schema name?
> >
> > Thanks.
> >
> > Tiff
> >
> > On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver <
> adrian.kla...@aklaver.com> wrote:
> >>
> >> On 2/11/19 8:30 AM, Tiffany Thang wrote:
> >> > Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to
> >> > achieve was to dump the schema quickly and be able to restore a single
> >> > or subset of objects from the dump. As far as I understand, the only
> way
> >> > of achieving that is to use the custom format and the -j option. Is
> that
> >> > correct? Are there any other alternatives?
> >>
> >> If you want to use -j then you need to use the -Fd output:
> >>
> >> https://www.postgresql.org/docs/10/app-pgdump.html
> >>
> >> "-j njobs
> >> --jobs=njobs
> >>
> >>  Run the dump in parallel by dumping njobs tables simultaneously.
> >> This option reduces the time of the dump but it also increases the load
> >> on the database server. You can only use this option with the directory
> >> output format because this is the only output format where multiple
> >> processes can write their data at the same time."
> >>
> >> If you need to grab just a subset of the schema then there are options
> >> to do that depending on the object. From above link as examples:
> >>
> >> "-n schema
> >> --schema=schema
> >>
> >>  Dump only schemas matching schema; this selects both the schema
> >> itself, and all its contained objects. ..."
> >>
> >>
> >> "-t table
> >> --table=table
> >>
> >>  Dump only tables with names matching table.  .."
> >>
> >>
> >> >
> >> > Thanks.
> >> >
> >> > Tiff
> >> >
> >> > On Mon, Feb 11, 2019 at 11:10 AM Ron wrote:
> >> >
> >> > On 2/11/19 10:00 AM, Tiffany Thang wrote:
> >> >  > Hi,
> >> >  > To copy the source schema A to target schema B in the same
> >> > database in
> >> >  > PG10.3, I use psql to dump schema A and manually removes
> anything
> >> > specific
> >> >  > to the schema in the text dump file before importing into
> schema
> >> > B. How do
> >> >  > I achieve the same exporting from Schema A and importing into
> >> > schema B
> >> >  > using pg_dump with the -Fc option? Since the dump file
> generated is
> >> >  > binary, I could not make modifications to the file. Is the
> >> > procedure the
> >> >  > same in version 11?
> >> >
> >> > Why do you need to use "--format=custom" instead of
> "--format=plain"?
> >> >
> >> > For example:
> >> > $ pg_dump --format=plain --schema-only --schema=A
> >> >
> >> >
> >> > --
> >> > Angular momentum makes the world go 'round.
> >> >
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
>


Hot_standby_feedback = on

2019-02-21 Thread dangal


Dear all, I have a question for you.
Even with hot_standby_feedback = on enabled, queries continue to be
canceled. In addition to enabling this parameter, do you have to modify any
of these?
#max_standby_archive_delay = 30s
#max_standby_streaming_delay = 30s
#wal_receiver_status_interval = 10s
The idea is to have a primary, a standby for high availability, and a second
standby cascaded from that replica, so we can run reports without
interfering with the other two databases.
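
For reference, a sketch of the knobs usually involved on the reporting
(cascaded) standby; the values are illustrative only, not a recommendation:

# hot_standby_feedback keeps upstream from vacuuming away rows the standby's
# queries still need; max_standby_streaming_delay is how long replay waits
# for a conflicting query before cancelling it (-1 = wait indefinitely).
psql -c "ALTER SYSTEM SET hot_standby_feedback = on;"
psql -c "ALTER SYSTEM SET max_standby_streaming_delay = '30min';"
psql -c "SELECT pg_reload_conf();"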



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Copy entire schema A to a different schema B

2019-02-21 Thread Adrian Klaver

On 2/21/19 11:52 AM, Tiffany Thang wrote:
Thanks everyone. Unfortunately the schema rename would not work since 
the source database will be our production system. We have not gone live 
yet but the system is expected to be constantly used.


I have multiple tables that I need to export ranging from 20GB to 60GB 
each. The parallel option will not help for a single table but would be
beneficial if I have multiple tables to dump.


I'm thinking maybe using what Adrian has suggested with the -f option 
and then modify the file or maybe use a common public schema everywhere 
on the source and target databases. I would have to restrict who has 
access to the public schema.


You can further break this down by using -s and -a switches to only work 
with the table definitions and table data respectively. This can also be 
done on the pg_dump end.




Thanks.

Tiff




 >> --
 >> Adrian Klaver
 >> adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: [GENERAL] Modified rows are not marked as dead and as such vacuum is unable to clean them up

2019-02-21 Thread Tamás András Kálmán
Hi Tom,

just cleaned house in my mailbox and found this email that got buried. You
were spot on with the prepared transactions, we had some hung Hibernate
threads that were acting up.

Sorry for resurrecting this ages-old thread, but I wanted to say a big fat
*THANKS*!

Cheers,
Tamas Kalman.

On Thu, Jul 6, 2017 at 11:25 PM Tom Lane  wrote:

> Tamás András Kálmán writes:
> > we have a PostgreSQL 9.3.4 database with multiple large tables, that keep
> > growing.
> > Looking at the pg_stat of  the front runner table that is currently
> about 1
> > Gb  big, it seems, that stale row data is not marked as dead after
> updates,
>
> Have you checked for uncommitted prepared transactions?
> If "select * from pg_prepared_xacts" finds anything, that's
> probably your problem.
>
> regards, tom lane
>
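
For anyone hitting the same thing, a quick sketch of the cleanup (the
database name and gid below are placeholders):

# List prepared transactions cluster-wide, oldest first.
psql -d postgres -c "SELECT gid, prepared, owner, database FROM pg_prepared_xacts ORDER BY prepared;"
# Roll back an orphaned one; this must be run while connected to its database.
psql -d the_affected_db -c "ROLLBACK PREPARED 'the_orphaned_gid';"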


Re: Connection string for Java to connect to PostgreSQL, using client certificates

2019-02-21 Thread s400t
Hello again Rob,

Thank you for pointing that.

Now what I did:

1. Copied the server.crt created in the postgresqlSERVER's /var/lib/CA/server
directory to the client side.

2. Ran this script:

openssl x509 -in server.crt -out server.crt.der -outform der


3. keytool -keystore $JAVA_HOME/jre/lib/security/cacerts -alias postgresql 
-import -file server.crt.der

--- some message---
Trust this certificate? [no]:  yes
Certificate was added to keystore


4. In my connection string, I added these lines, with hints from that site you 
mentioned.



props.setProperty("trustStore", 
"/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts");
props.setProperty("trustStorePassword", "changeit"); (I entered that password 
when I ran script #3)


... and I still got

org.postgresql.util.PSQLException: Could not read SSL key file 
/home/user1/cert/postgresql.key.


As a bonus, I had one extra error:
Caused by: java.io.IOException: extra data given to DerValue constructor



After spending the better part of the morning going through different errors,
I came to this site:
https://postgresrocks.enterprisedb.com/t5/EDB-Guides/How-to-setup-SSL-authentication/ba-p/1647




This one (#5.5) helped:
5.5 convert the client key in DER format:

openssl pkcs8 -topk8 -outform DER -in postgresql.key -out postgresql.key.pk8 
-nocrypt



Yes, instead of
props.setProperty("sslkey","/home/user1/cert/postgresql.key");

I used  
props.setProperty("sslkey","/home/user1/cert/postgresql.key.pk8"); 

and it worked!

My final connection string:

    String url = "jdbc:postgresql://192.168.56.102:5432/testdb";

    Properties props = new Properties();
    props.setProperty("user","user1");
    props.setProperty("ssl","true");
    props.setProperty("sslmode","verify-ca");
    props.setProperty("sslrootcert","/home/user1/cert/root.crt");
    props.setProperty("sslkey","/home/user1/cert/postgresql.key.pk8");
    props.setProperty("sslcert","/home/user1/cert/postgresql.crt");
        
    c = DriverManager.getConnection(url,props);

I did find many sites mentioning the need to make the server certificate
available to Java, but I don't know why that alone didn't work for me.
For now, the immediate problem has been solved.

Cheers!









- Original Message -
> From: rob stone 
> To: s4...@yahoo.co.jp; "pgsql-general@lists.postgresql.org" 
> 
> Cc: 
> Date: 2019/2/20, Wed 21:41
> Subject: Re: Connection string for Java to connect to PostgreSQL, using 
> client certificates
> 
> Hello,
> 
> On Thu, 2019-02-21 at 13:10 +0900, s4...@yahoo.co.jp wrote:
>>  I am having hard time to connect to PostgreSQL server using client
>>  certificate from within a Java program.
>>  Any insight would be helpful.
>> 
>>  I can connect to the server using psql command line from a client
>>  machine(192.168.56.101) (psql -h 192.168.56.102 -U user1 -d testdb)
>>  [192.168.56.102 is "postgreSERVER" machine)
>>  //
>>  successful outcome looks like this:
>>  psql (9.6.10)
>>  SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-
>>  SHA384, bits: 256, compression: off)
>>  Type "help" for help.
>> 
>>  user1=# 
>>  //---
>> 
>>  However, I have been unable to connect using a Java connection
>>  string.
>> 
>>  This is more like ssl/certificate issue, and only tangentially
>>  related to the postgreSQL, but since I want to make it work using
>>  Java (running a test program from Eclipse), I am trying my luck here.
>> 
>>  I started by creating a CA, server side key and certificate, and
>>  client side key and certificate. This I learnt by watching a Youtube
>>  video (https://www.youtube.com/watch?v=FWK3lR6bSn8 ).
>> 
>>  For my own memo, I am reproducing the steps to create certificates
>>  and keys below, copied directly from that youtube:
>> 
>>  After creating those files, I copied the server side files to
>>  /etc/postgresql/9.6/main/) (I am using Debian, and "data" 
> directory
>>  seems to be "/etc/postgresql/9.6/main/").
>>  and the client side files to /home/user1/.postgresql folder. (had to
>>  created ".postgresql" folder)
>>  The files were chmodded to 600.
>>  And when I used psql from a client machine (Debian), I can connect
>>  happily as I mentioned above.
>> 
>>  Now for the Java test:
>>  I copied the "client side" files to /home/user1/cert/ (created 
> "cert"
>>  folder) 
>> 
>>  The files are: 
>>  postgresql.crt (1)
>>  postgresql.key (2)
>>  root.crt (3)
>> 
>> 
>>  (1)originally created as "client.crt"
>>  in 192.168.56.102:/var/lib/CA/client/, and copied as postgresql.crt
>>  to the client side
>>  (2)originally created as "client.key"
>>  in 192.168.56.102:/var/lib/CA/client/, and copied as postgresql.key
>>  (3)originally created as "rootCA.crt" in 
> 192.168.56.102:/var/lib/CA/,
>>  and copied as "root.crt"  
>> 
>>  My connection string is:
>> 
>>  Connection c = null;
>>  Statement st = null;
>> 
>> 
>>  try {
>>           Class.forName("org.postgresql.Driver");
>>           
>>           

Re: Copy entire schema A to a different schema B

2019-02-21 Thread Melvin Davidson
Tiffany, have you tried the clone_schema function? It seems to me it does
exactly what you need, no dumping or restoring. There is
even an option to copy the data or not. Default is not.

On Thu, Feb 21, 2019 at 3:23 PM Adrian Klaver 
wrote:

> On 2/21/19 11:52 AM, Tiffany Thang wrote:
> > Thanks everyone. Unfortunately the schema rename would not work since
> > the source database will be our production system. We have not gone live
> > yet but the system is expected to be constantly used.
> >
> > I have multiple tables that I need to export ranging from 20GB to 60GB
> > each. The parallel option will not help for a single table but would be
> > beneficial if I have multiple tables to dump.
> >
> > I'm thinking maybe using what Adrian has suggested with the -f option
> > and then modify the file or maybe use a common public schema everywhere
> > on the source and target databases. I would have to restrict who has
> > access to the public schema.
>
> You can further break this down by using -s and -a switches to only work
> with the table definitions and table data respectively. This can also be
> done on the pg_dump end.
>
> >
> > Thanks.
> >
> > Tiff
> >
>
> >  >> --
> >  >> Adrian Klaver
> >  >> adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Re: pg_stat_statements doesn't track commit from pl/pgsql blocks

2019-02-21 Thread Bruce Momjian
On Sun, Feb 17, 2019 at 02:52:07PM -0700, legrand legrand wrote:
> Hello,
> 
> It seems that pgss doesn't track commit (nor rollback) commands from
> pl/pgsql blocks.
> using psql in version 11.1:
> 
> select pg_stat_statements_reset();
> do $$ begin commit; end $$;
> select calls,query from pg_stat_statements;

Uh, can you show me exactly what you were hoping to see?  I see:

CREATE EXTENSION pg_stat_statements;

SELECT pg_stat_statements_reset();
 pg_stat_statements_reset
--

DO $$ BEGIN COMMIT; END $$;

SELECT calls,query FROM pg_stat_statements;
 calls |   query
---+---
 1 | select pg_stat_statements_reset()
 1 | do $$ begin commit; end $$

You wanted to see the 'commit'?  That is not a client-supplied command
and is not tracked, and I am not sure we would want to do that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-21 Thread Bruce Momjian
On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> Ah, right, I forgot that it did that, fair enough.
> 
> I've never been thrilled with that particular approach due to the
> inherent risks of people messing directly with files like pg_control,
> but that's how it is for now.

There was too much concern that users would accidentally start the old
server at some later point, and its files would be hard linked to the
new live server, leading to disaster.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-21 Thread Stephen Frost
Greetings,

* Bruce Momjian (br...@momjian.us) wrote:
> On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> > Ah, right, I forgot that it did that, fair enough.
> > 
> > I've never been thrilled with that particular approach due to the
> > inherent risks of people messing directly with files like pg_control,
> > but that's how it is for now.
> 
> There was too much concern that users would accidentally start the old
> server at some later point, and its files would be hard linked to the
> new live server, leading to disaster.

Sure, I understand that concern, just wish there was a better approach
we could use for "DO NOT START THIS SERVER" rather than moving of the
pg_control file.

Thanks!

Stephen




Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-21 Thread Bruce Momjian
On Thu, Feb 21, 2019 at 09:31:32PM -0500, Stephen Frost wrote:
> Greetings,
> 
> * Bruce Momjian (br...@momjian.us) wrote:
> > On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> > > Ah, right, I forgot that it did that, fair enough.
> > > 
> > > I've never been thrilled with that particular approach due to the
> > > inherent risks of people messing directly with files like pg_control,
> > > but that's how it is for now.
> > 
> > There was too much concern that users would accidentally start the old
> > server at some later point, and its files would be hard linked to the
> > new live server, leading to disaster.
> 
> Sure, I understand that concern, just wish there was a better approach
> we could use for "DO NOT START THIS SERVER" rather than moving of the
> pg_control file.

As ugly as it is, I have never heard of a better solution.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Postgresql RDS DB Latency Chossing Hash join Plan

2019-02-21 Thread github kran
Thanks for the feedback.

On Tue, Feb 19, 2019 at 11:12 AM Michael Lewis  wrote:

> 1) You can increase it as much as you want but (auto)analyze will take
> longer to examine the values of default_stat_target * 300 rows and compute
> the most common values and the frequencies of those values. How much
> variation does you data actually have? If your data only has 50 distinct
> values with fairly even distribution, then no need to increase it from 100
> even. Oh, deciding on the best query plan will take a little more time for
> the optimizer since it will be examining bigger collection of stats on the
> tables that you have increased.
>
> 2) I am not aware.
>
> 3) I am not aware of anything about your application so I can't recommend
> any number outright, but 4MB for work_mem definitely seems low to me
> assuming you have 16GB or more memory available unless you have very high
> concurrency. It will depend on how many sorts per statement, how many
> users, etc. If you spill over to disk on routine operations, then things
> are definitely going to be much slower than if you are able to keep things
> in memory. You could try running explain analyze and just verify that you
> are keeping things in memory. You could also turn on automatic gathering of
> explain analyze plans on live if you have the room for logging and can
> tolerate just a little latency.
>
> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
>
> https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
>
> I'm glad your performance is improved in AWS/dev environment. It can be a
> big hassle to test things in an environment that performs significantly
> different.
>
>
> *Michael Lewis*
>
>
> On Sun, Feb 17, 2019 at 10:01 AM github kran  wrote:
>
>>
>>
>> On Thu, Feb 14, 2019 at 4:58 PM Michael Lewis  wrote:
>>
>>> This is beyond my expertise except to say that if your storage is SSDs
>>> in AWS, then you definitely want random_page_cost close to the same as
>>> seq_page_cost (1 by default) assuming your data is likely to be in cache as
>>> discussed in the documentation
>>> .
>>> As it says- "Reducing this value relative to seq_page_cost will cause the
>>> system to prefer index scans" as you saw. Changing the value on production
>>> would again depend on the storage type used, and how good the cache hit
>>> rate is.
>>>
>>> As far as I know, dropping old partitions should not be significantly
>>> impactful to the system other than no longer needing to store that data
>>> (cost, time for full backups, etc).
>>>
>>> Again, as I understand things, there is not a big impact from having old
>>> unused tables in terms of maintenance. They should be ignored by normal
>>> processes.
>>>
>>> Glad you got your issue resolved.
>>>
>>>
>>> *Michael Lewis*
>>>
>>
>>  Thanks for the feedback. You have been giving your
>> thoughts/suggestions since the beginning of the case. It was helpful.  I
>> think I realized later, based on your suggestion to increase the default
>> statistics target from 100, that it was not set correctly initially,
>> as I had set it at the session level without setting it on the
>> partition tables. As a next step I set the stats target to 1000 on all of the
>> partition tables we use from Nov 2018 to Feb 2019 and then ran ANALYZE. I am
>> currently running a load test to see
>> how the DB performance is behaving right now, and so far it is running
>> better than before. (I have reset all the previous changes except the
>> statistics change.)  I will keep you posted after the test finishes
>>
>> Questions.
>> 1) Can I further increase the setting to 3000 and see how the system
>> behaves? How do I know the best value to use for my database in
>> terms of the sampling limit with the default statistics setting?
>>  2) Apart from analyzing the tables, do I need to make any other
>> changes to the statistics setting?
>>  3) Also, the current work_mem is set to 4 MB and we haven't played with
>> this value so far. For future needs, can I increase the work_mem setting?
>>
>> Appreciate your reply.
>>
>> Thanks
>>
>>
>>> On Thu, Feb 14, 2019 at 3:11 PM github kran 
>>> wrote:
>>>


 On Thu, Feb 14, 2019 at 12:43 PM Michael Lewis 
 wrote:

> How many total rows in these tables? I am assuming these are
> partitions and those 100 million rows got distributed. If the data
> difference is significant, then you aren't guaranteed similar performance.
> You may want to follow more of the suggested steps on.
>
> https://wiki.postgresql.org/wiki/SlowQueryQuestions
>
>
> *Michael Lewis  |  Software Engineer*
> *Entrata*
> *c: **619.370.8697 <619-370-8697>*
>
> Michael - Yes correct the data of 100 million rows is distributed to
 all the partitions.
 FInally I feel we have come to conclusion after we changed the
 random_

Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-21 Thread Tom Lane
Bruce Momjian  writes:
> On Thu, Feb 21, 2019 at 09:31:32PM -0500, Stephen Frost wrote:
>> * Bruce Momjian (br...@momjian.us) wrote:
>>> There was too much concern that users would accidentally start the old
>>> server at some later point, and its files would be hard linked to the
>>> new live server, leading to disaster.

>> Sure, I understand that concern, just wish there was a better approach
>> we could use for "DO NOT START THIS SERVER" rather than moving of the
>> pg_control file.

> As ugly as it is, I have never heard of a better solution.

system("rm -rf $OLDPGDATA") ... nah, that is not a better idea.

regards, tom lane



How many billion rows of data I can store in PostgreSQL RDS.

2019-02-21 Thread github kran
Hello Pgsql-General,

We currently have around 6 TB of data and have plans to move some
historic data, close to 1 TB, into RDS. The total row count in the
partitioned tables is around 6 billion today, and we plan to keep the
data long term, which would add around 5-8 billion rows per year.

So in short, my use case is to add about 8 billion rows of data every year
and store at least 16 billion rows every 2 years.

   1. How many billion rows can RDS handle? This data would be exposed
   by AP's accessing this data.

Appreciate your reply on this.
Thank you.


Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-21 Thread Adrian Klaver

On 2/21/19 9:08 PM, github kran wrote:

Hello Pgsql-General,

We currently have around 6 TB of data and have plans to move some
historic data, close to 1 TB, into RDS. The total row count in the
partitioned tables is around 6 billion today, and we plan to keep
the data long term, which would add around 5-8 billion rows per year.


So in short, my use case is to add about 8 billion rows of data every
year and store at least 16 billion rows every 2 years.


 1. How many billion rows can RDS handle? This data would be exposed
by AP's accessing this data.

Appreciate your reply on this.


This would be a question for AWS RDS support.


Thank you.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Channel binding not supported using scram-sha-256 passwords

2019-02-21 Thread Michael Paquier
On Thu, Feb 21, 2019 at 08:32:01PM +0100, Peter Eisentraut wrote:
> On 2019-02-21 05:47, Michael Paquier wrote:
>>  if (conn->ssl_in_use)
>> +{
>> +/*
>> + * The server has offered SCRAM-SHA-256-PLUS, 
>> which is only
>> + * supported by the client if a hash of the 
>> peer certificate
>> + * can be created.
>> + */
>> +#ifdef HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH
>>  selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
>> +#endif
>> +}
> 
> Is that right?  Won't we then just select nothing if the macro is not
> defined?

In the context of an SSL connection, the server would send both SCRAM
and SCRAM_PLUS as valid mechanisms if it supports channel binding
(HAVE_BE_TLS_GET_CERTIFICATE_HASH).  If the server does not support
channel binding, then only SCRAM is sent.

So you have the following possible cases for an SSL connection:
1) Server supports channel binding, sends SCRAM_PLUS and SCRAM as
allowed mechanisms.
1-1) Client supports channel binding, and it chooses SCRAM_PLUS.
1-2) Client does not support channel binding, and it chooses SCRAM.
2) Server does not support channel binding, sends SCRAM as the allowed
mechanism.
2-1) Client supports channel binding, still it has no choice but to
choose SCRAM.
2-2) Client does not support channel binding, it chooses SCRAM.

On HEAD, the bug you have spotted would cause case 1-2) to fail as the
client thinks that it should choose SCRAM_PLUS even if it does not
support it, causing a failure.  My patch changes things so as SCRAM
gets chosen instead of SCRAM_PLUS as the server sends both SCRAM and
SCRAM_PLUS to the client as possible choices, still the client is not
able to handle SCRAM_PLUS.

Does this explanation make sense?
--
Michael




Re: How many billion rows of data I can store in PostgreSQL RDS.

2019-02-21 Thread Michael Paquier
On Thu, Feb 21, 2019 at 09:14:24PM -0800, Adrian Klaver wrote:
> This would be a question for AWS RDS support.

And this also depends a lot on your schema, your column alignment, and
the level of bloat of your relations.
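
A quick way to get a feel for the current per-row on-disk cost before
extrapolating to billions of rows (a sketch; the database name is a
placeholder):

psql -d yourdb -c "
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size,
       reltuples::bigint AS approx_rows,
       (pg_total_relation_size(oid) / NULLIF(reltuples, 0))::bigint AS approx_bytes_per_row
FROM pg_class
WHERE relkind = 'r' AND reltuples > 0
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;"
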
--
Michael

