Re: Active connections are terminated because of small wal_sender_timeout

2019-07-31 Thread AYahorau
Hello Everyone!

Sorry for being persistent.

> I do not think anybody thinks this is a bug.  Setting wal_sender_timeout
> too small is a configuration mistake.

Why is it a configuration mistake? This value is allowed to be set; there is 
no restriction on it.
I would like to ask a question regarding the wal_sender_timeout description 
and its actual behaviour.
Description:
Terminate replication connections that are inactive longer than the 
specified number of milliseconds.
Actual behaviour:
A standby server can be busy and fail to send "keepalive" packets to the 
master.
(By the way, wal_sender_timeout does not have to be as small as in my tests; 
this situation can also happen when wal_sender_timeout is 10 seconds, and so 
on.)

Does it work properly according to the description? Don't you see a 
contradiction between the description and the actual behaviour? As I mentioned 
before, the connection between the master and the standby is good; there is no 
problem with it.
So where is the bug: in the description or in the implementation?

> Yeah.  I don't see any bug here.  Please note that it can be also a
> problem to set up a too high value in some configuration setups.  The
> lack of flexibility in this area is why wal_sender_timeout has been
> switched to be user-settable in v12.  In short you can configure it in
> the connection string to enforce a custom value per standby.

Will a small value of wal_sender_timeout in PostgreSQL v12 lead to the same 
failure "terminating walsender process due to replication timeout" as we 
observe in v11?

Best regards, 
Andrei Yahorau

pgaudit.log_parameter

2019-07-31 Thread Luca Ferrari
Hello,
I'm a little confused about the setting pgaudit.log_parameter of the
pgaudit extension
(https://github.com/pgaudit/pgaudit/blob/master/README.md).
What's the purpose of this? What is an example of a query that would trigger
such parameter logging? Apparently I cannot get it to provide me more
information than ''.

Thanks,
Luca




Oracle to postgres migration via ora2pg (blob data)

2019-07-31 Thread Daulat Ram
Hello team,

We have to migrate a schema from Oracle to Postgres, but one table has the 
following large LOB segments and is taking a long time to export. Which 
parameters do we have to set in ora2pg.conf to speed up the data export by 
ora2pg?

Table: CLIENT_DB_AUDIT_LOG

LOBSEGMENT   SYS_LOB095961C8$$  80.26
LOBSEGMENT   SYS_LOB095961C7$$  79.96
LOBSEGMENT   SYS_LOB094338C8$$   8.84
LOBSEGMENT   SYS_LOB084338C7$$   8.71
LOBSEGMENT   SYS_LOB085961C9$$   5.32

VM Details are:

RAM  8GB
VCPUs 2 VCPU
Disk 40GB

Thanks,



SV: Oracle to postgres migration via ora2pg (blob data)

2019-07-31 Thread Niels Jespersen
I would look at the source table in Oracle first. It looks a lot like audit 
data. Perhaps not all of the content is needed in Postgres. If it is, then the 
table and LOBs may benefit from being reorganised in Oracle.

Alter table CLIENT_DB_AUDIT_LOG move;
Alter table CLIENT_DB_AUDIT_LOG  move lob (SYS_LOB095961C8$$);
-- Three more of these.

The syntax is from the back of my head; you may need to look up the details.

Niels





Re: pgaudit.log_parameter

2019-07-31 Thread Artur Zakirov

Hello,

On 31.07.2019 14:21, Luca Ferrari wrote:

Hello,
I'm a little confused about the setting pgaudit.log_parameter of the
pgaudit extension
(https://github.com/pgaudit/pgaudit/blob/master/README.md).
What's the purpose of this? What is an example of a query that would trigger
such parameter logging? Apparently I cannot get it to provide me more
information than ''.


pgaudit.log_parameter allows logging the parameters of prepared statements. 
See the documentation:


https://www.postgresql.org/docs/current/sql-prepare.html

The following example logs parameters if pgaudit.log_parameter is on:

EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);

It should log parameters: 1, 'Hunter Valley', 't', 200.00
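
For context, fooplan here is the prepared statement from the PREPARE 
documentation page linked above (the foo table itself is assumed to exist):

PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES($1, $2, $3, $4);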

--
Artur





Re: Active connections are terminated because of small wal_sender_timeout

2019-07-31 Thread Tom Lane
ayaho...@ibagroup.eu writes:
>> I do not think anybody thinks this is a bug.  Setting wal_sender_timeout
>> too small is a configuration mistake.

> Why is it a configuration mistake? This value is allowed  to be set. There 
> is no any restriction about it.

The fact that a parameter can be set does not mean that we guarantee that
every possible value will work for everybody.  As an example, if you
configure work_mem to a few TB and then start getting OOM failures because
your machine can't actually support that, it's not a bug that we let you
set the value that high.  The upper limit of what's usable is too variable
and too hard to determine, so we don't even try; it's on you to choose a
suitable setting for your situation.  Similarly, the useful range of
wal_sender_timeout is hard to predict and is likely to be quite different
for different installations.  We don't want to artificially constrain what
people can use, so the range of allowed settings *ought* to include some
values that are not practically useful in specific situations.  Those
values might be just the right ones for someone else.

regards, tom lane




Which version to upgrade upto

2019-07-31 Thread Vikas Sharma
Hi All,

We are using Postgres 9.5.9 in streaming replication with repmgr. The project
is now considering updating the PostgreSQL instances to a more recent version.

I am looking for advice on which version to upgrade to. I can see that the
current release in the 11 series is 11.4, and 10.9 in the 10 series.

How to decide on which version we should use?

Should I go for 10.9 or 11.2? The architects are suggesting 11.2

Please advise.

Regards
Vikas


Re: Which version to upgrade upto

2019-07-31 Thread Luca Ferrari
On Wed, Jul 31, 2019 at 4:55 PM Vikas Sharma  wrote:
> Should I go for 10.9 or 11.2? The architects are suggesting 11.2

Moving from 9.5 requires a major version upgrade in any case, therefore
I would go for the latest one, 11.4.
Are there any particular needs that feed your doubts about the version?

Luca




Re: Which version to upgrade upto

2019-07-31 Thread Vikas Sharma
The architects and developers have the perception that the latest release
will always have bugs, whereas an earlier one is what others might already be
using in production. They feel 11.2 will be a better bet than 11.4.



On Wed, Jul 31, 2019, 16:24 Luca Ferrari  wrote:

> On Wed, Jul 31, 2019 at 4:55 PM Vikas Sharma  wrote:
> > Should I go for 10.9 or 11.2? The architects are suggesting 11.2
>
> Moving from 9.5 requires a major version upgrade in any case, therefore
> I would go for the latest one, 11.4.
> Are there any particular needs that feed your doubts about the version?
>
> Luca
>


Re: Which version to upgrade upto

2019-07-31 Thread Rob Sargent



On 7/31/19 9:57 AM, Vikas Sharma wrote:
The architects and developers have the perception that the latest release 
will always have bugs, whereas an earlier one is what others might already be 
using in production. They feel 11.2 will be a better bet than 11.4.




Except of course for the bugs fixed in .3 and .4.




Re: Which version to upgrade upto

2019-07-31 Thread David G. Johnston
On Wed, Jul 31, 2019 at 8:58 AM Vikas Sharma  wrote:

> The architects and developers have the perception that the latest release
> will always have bugs, whereas an earlier one is what others might already
> be using in production. They feel 11.2 will be a better bet than 11.4.
>

Beginning with version 10, the second position in the version number became
the patch release number, so both 11.2 and 11.4 are the same major version
(v11), and 11.2 is guaranteed to be buggier than 11.4. If you wish to be
cautious, you should upgrade to the v10 series, which is presently at 10.9.

David J.


Re: Oracle to postgres migration via ora2pg (blob data)

2019-07-31 Thread Amol Tarte
FullConvert does this job much faster than ora2pg


With Warm Regards,
Amol Tarte,
Project Lead,
Rajdeep InfoTechno Pvt. Ltd.
Visit us at http://it.rajdeepgroup.com



Re: Which version to upgrade upto

2019-07-31 Thread Thomas Kellerer

Vikas Sharma wrote on 31.07.2019 at 17:57:

The architects and developers have the perception that the latest release
will always have bugs, whereas an earlier one is what others might already be
using in production. They feel 11.2 will be a better bet than 11.4.



You should always use the latest minor version, so 11.4 is preferred over 11.2.

Quote from the homepage[1]:


For minor releases, the community considers not upgrading to be riskier than 
upgrading


So definitely go with 11.4

If you want to see how many (and which) bugs have been fixed between 11.2 and 
11.4 you can check:

   https://why-upgrade.depesz.com/show?from=11.2&to=11.4&keywords=

Thomas


[1] https://www.postgresql.org/support/versioning/




Re: Which version to upgrade upto

2019-07-31 Thread Tom Lane
Vikas Sharma  writes:
> The architects and developers have the perception that the latest release
> will always have bugs, whereas an earlier one is what others might already
> be using in production. They feel 11.2 will be a better bet than 11.4.

Your architects are apparently completely unfamiliar with Postgres.
Tell them to read
https://www.postgresql.org/support/versioning/

particularly the bit about

While upgrading will always contain some level of risk, PostgreSQL
minor releases fix only frequently-encountered bugs, security issues,
and data corruption problems to reduce the risk associated with
upgrading. For minor releases, the community considers not upgrading
to be riskier than upgrading.

It is true that we've sometimes accidentally introduced regressions
into minor releases --- we're all mortal.  But to focus on that case
and ignore all the genuine bug fixes in each minor release is flat out
folly.

regards, tom lane




DDL and DML in a transaction

2019-07-31 Thread Igal @ Lucee.org

I am trying to change a text column into a numeric one in a large table.

My idea was to add a new column, update it, drop the old column, and 
rename the new one to the old name.  I am hoping that that would make it 
faster and minimize locking time though I'm not sure that it would.


I am therefore trying to execute the following but I'm getting an error 
that the new column does not exist:


begin;
    alter table some_table
        add column if not exists amount_num numeric(30,12);

    update some_table
        set amount_num = amount_text::numeric(30,12);

    alter table some_table
        drop column amount_text;

    alter table some_table
        rename column amount_num to amount_text;

    alter table some_table
        drop column amount_num;

    commit;
end;

Am I missing something?  Is this supposed to work?

Would it have less locking than simply altering the column?

Thanks,

Igal






Re: DDL and DML in a transaction

2019-07-31 Thread David G. Johnston
On Wed, Jul 31, 2019 at 11:38 AM Igal @ Lucee.org  wrote:

>  alter table some_table
>  rename column amount_num to amount_text;
>
>  alter table some_table
>  drop column amount_num;
>

You just renamed amount_num to amount_text so I'm not sure why you expect
the drop to succeed.

> Would it have less locking than simply altering the column?

I doubt anything will improve upon simply altering the column.  You have to
perform a full table rewrite in either case which is going to be the main
resource consumer.
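
For illustration, the single-statement form would be something like this
(using the column names from your post; untested):

    alter table some_table
        alter column amount_text type numeric(30,12)
        using amount_text::numeric(30,12);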

David J.


Re: DDL and DML in a transaction

2019-07-31 Thread Igal @ Lucee.org

Thank you, David.

I should get more sleep...

Igal



Re: How do I create a Backup Operator account ?

2019-07-31 Thread Peter J. Holzer
On 2019-07-31 07:48:36 +0200, Luca Ferrari wrote:
> On Wed, Jul 31, 2019 at 2:48 AM Marcos Aurelio Nobre
>  wrote:
> > But I don't know how to implement this on Linux, nor how to write this 
> > entry in the pg_hba.conf file.
> 
> I would start with an entry in pg_hba.conf like the following:
> 
> host    all    pg_backup_username    localhost    md5
> 
> or
> 
> host    all    pg_backup_username    localhost    md5
> 
> 
> The problem then comes down to how to prevent the operating system user
> from running psql.

This problem can be solved by not granting anyone shell access as that
user. It is only used as a target for sudo, and sudo is configured to
run only pg_dump and pg_restore as that user (plus maybe other programs
to list available backups, review logs, remove old backups, ...)
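
For example, the backup operator would then invoke something along these
lines (the paths and database name are placeholders):

    sudo -u pg_backup_username /usr/local/bin/pg_dump -Fc exampledb \
        -f /backups/exampledb.dump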

A web interface might be used as an alternative to sudo.

> Even something like the following (untested) in /etc/sudoers will NOT
> prevent the user from accessing the database:
> 
> 
> User_Alias PGBACKUPUSERS = pg_backup_username
> Cmnd_Alias PGBACKUP = /usr/local/bin/pg_dump,
> /usr/local/bin/pg_restore, ! /usr/local/bin/psql
> PGBACKUPUSERS backup_host = PGBACKUP

This is the wrong way around. It should be something like

alice, bob = (pg_backup_username) /usr/local/bin/pg_dump

(Apologies if I didn't get the syntax right. Slogging through the sudoers
manual reminded me why I wrote xssd 15 years ago).
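
Spelled out with the host field, and reusing the command alias idea from the
quoted example above, that would look roughly like this (still untested;
check sudoers(5) for the exact grammar):

Cmnd_Alias PGBACKUP = /usr/local/bin/pg_dump, /usr/local/bin/pg_restore
alice, bob backup_host = (pg_backup_username) PGBACKUP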

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: Oracle to postgres migration via ora2pg (blob data)

2019-07-31 Thread Gilles Darold

Hi,

Before using impressive commercial products, you can try some additional
configuration with Ora2Pg.

The only solution to improve data migration performance is to use
parallelism. The problem with BLOBs is that they have to be converted into
hex to be inserted into a bytea. The internal function in Ora2Pg that is
responsible for this job is _escape_lob(). The other problem is that Oracle
is very slow to send the BLOB to Ora2Pg; I don't know whether commercial
products are better on this point, but I challenged Ora2Pg against Kettle
some years ago without seeing much difference. So what you should do first is
set the following in your ora2pg.conf:

NO_LOB_LOCATOR    1
LONGREADLEN        1


This will force Oracle to send the full content of the BLOB in a single pass,
otherwise it will use small chunks. The only drawback is that you have to set
LONGREADLEN to the highest BLOB size in your table to avoid a LONGTRUNC
error.

That also means that for each row returned, DBD::Oracle (the Perl driver for
Oracle) will allocate at least LONGREADLEN in memory, and we want to extract
as many rows as possible. You will have understood that your 8GB of memory
will limit the number of rows that can be exported at the same time.

The other point is that Oracle is slow, so you have to parallelize the data
export. Use -J 4 on the command line to create 4 simultaneous processes for
data export. Parallelization on the Oracle side is only possible if you have
a numeric column that can be used to split the data, using modulo 4 in this
case. This is a basic implementation, but it is enough in most cases.

Converting BLOB to bytea consumes a lot of CPU cycles too, so it is a good
practice to parallelize this work as well. Use -j 2 or -j 3 for this work.
The number of parallel processes should be tested, because there is a limit
beyond which you will not gain anything.

If you have, let's say, 32GB of memory and 12 CPUs, you could try a command
like:

    ora2pg -c ora2pg.conf -J 4 -j 3 -t CLIENT_DB_AUDIT_LOG -L 500

If you have fewer resources, don't forget that -J and -j must be multiplied
to get the number of processes that Ora2Pg will run in parallel. The -L
option (DATA_LIMIT) is used to reduce the number of rows extracted at a time.
Here, with a value of 500, it will process 50 (DATA_LIMIT/10) rows with BLOBs
at a time. If the table does not have any BLOB it will use 500 rows at a
time. For most tables this parameter should be set to 1 up to 25. If you
have a lot of memory the value can be higher. If you think it is too low you
can set BLOB_LIMIT in ora2pg.conf to set it to a higher value.
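
Putting the directives discussed above together, a sketch of the relevant
ora2pg.conf settings could look like this (the LONGREADLEN value is only a
placeholder; set it to at least the size of your largest BLOB, in bytes):

    # fetch whole LOBs directly instead of going through LOB locators
    NO_LOB_LOCATOR  1
    # must be >= the largest BLOB in the table (placeholder value)
    LONGREADLEN     104857600
    # rows fetched per pass; tables with BLOBs use DATA_LIMIT/10
    DATA_LIMIT      500
    # optional: override the per-pass row count for BLOB tables
    BLOB_LIMIT      50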

However, Ora2Pg will show you the data migration speed, so you can adjust all
these parameters and see whether you get some performance gains. If you want
to know exactly at which speed Oracle is able to send the data, add
--oracle_speed to the ora2pg command. Ora2Pg will then only extract data from
Oracle; there will be no bytea transformation or data

adding more space to the existing server

2019-07-31 Thread Julie Nishimura
Hello postgres folks,

We're tossing around the idea of upgrading a replicated Postgres cluster (37 
dbs) by breaking the replication, adding different-size (larger) data disks to 
the hot spare, turning replication back on and letting it fully populate, then 
breaking replication again, making the standby the primary, upgrading the disks 
on the other system, bringing it back up, replicating backwards until fully 
replicated, and then failing back to the original primary. Is this feasible?

Our current size is 22 tb, and it is 97% full
(PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit)


Thank you for your suggestions


Re: adding more space to the existing server

2019-07-31 Thread Ron

On 7/31/19 5:21 PM, Julie Nishimura wrote:

Hello postgres folks,

We're tossing around the idea of upgrading a replicated Postgres cluster 
(37 dbs) by breaking the replication, adding different-size (larger) data 
disks to the hot spare, turning replication back on and letting it fully 
populate, then breaking replication again, making the standby the primary, 
upgrading the disks on the other system, bringing it back up, replicating 
backwards until fully replicated, and then failing back to the original 
primary. Is this feasible?


Our current size is 22 tb, and it is 97% full
(PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit)



Thank you for your suggestions


Can we correctly infer that you aren't using LVM?  (Or is there no more room 
on the rack/controller for new drives?)
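
With LVM, for reference, a larger disk could typically be added and the
filesystem grown online, roughly along these lines (device and volume names
are placeholders):

    pvcreate /dev/sdX
    vgextend pgdata_vg /dev/sdX
    lvextend -r -L +20T /dev/pgdata_vg/pgdata_lv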


--
Angular momentum makes the world go 'round.