Looking for Postgres upgrade Matrix

2019-07-17 Thread Perumal Raj
Hi Team,

Is there a reference link that explains the various upgrade paths (direct
/ indirect) using pg_upgrade or the latest utility?

I believe pg_dump can be used to go from any lower version to a higher
version. Please correct me if I am wrong.

Thanks,
Raj


Re: Looking for Postgres upgrade Matrix

2019-07-17 Thread Perumal Raj
Thanks Adrian, David,

Basically, I want to upgrade a few 9.X/8.X version DBs to a stable
version (10.X / 11.X), with as little downtime as possible.
So I want to understand whether a direct upgrade is possible between major
releases.

Thanks,

On Wed, Jul 17, 2019 at 11:24 AM Adrian Klaver 
wrote:

> On 7/17/19 11:15 AM, Perumal Raj wrote:
> > Hi Team,
> >
> > Do we have any reference link which explain various  upgrade path (
> > Direct / indirect) by using pg_upgrade or latest utility.
>
> https://www.postgresql.org/docs/11/backup.html
>
> What exactly are you trying to do?
>
> >
> > I hope pg_dump can be used from any lower version to Higher version.
>
> As long as you use the newer pg_dump to dump the older version.
>
> > Please correct me if I am wrong.
>
>
> >
> > Thanks,
> > Raj
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
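Adrian's point, that the newer release's pg_dump must do the dumping, can be sketched as a pair of commands. The paths and ports below are illustrative assumptions, not taken from the thread:

```shell
# Use the NEW version's pg_dumpall against the OLD server (port 5432),
# then restore into the new cluster (port 5433). Paths are assumed.
/usr/pgsql-11/bin/pg_dumpall -p 5432 -f dump.sql
/usr/pgsql-11/bin/psql -p 5433 -d postgres -f dump.sql
```

Both clusters must be running for this; the older pg_dump cannot be pointed at a newer server.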


Resolved: Looking for Postgres upgrade Matrix

2019-07-17 Thread Perumal Raj
Ok, thanks for the clarification.

On Wed, Jul 17, 2019 at 11:46 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Jul 17, 2019 at 11:39 AM Perumal Raj  wrote:
>
>> Thanks Adrian, David,
>>
>> Basically , i want to upgrade few 9.X/8.X version DBs  to some stable
>> version ( 10.X / 11.X ), At the same time with less down time.
>> So want to understand whether direct upgrade possible or not between
>> major releases .
>>
>
> From the pg_upgrade documentation:
>
> "pg_upgrade supports upgrades from 8.4.X and later to the current major
> release of PostgreSQL, including snapshot and beta releases."
>
> You demonstrated knowledge of the two relevant programs that can be used
> to upgrade and their documentation explicitly states their minimum version
> limit so I'm not understanding why there is a question.  pg_upgrade is the
> better option for upgrading.
>
> David J.
>
>


pg_upgrade: 9.X to 11.X issue on CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi Team,

Please give me some pointers to resolve this issue.

-bash-4.2$ export OLDCLUSTER=/usr/pgsql-9.2
-bash-4.2$ export NEWCLUSTER=/usr/pgsql-11

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433 --check
Performing Consistency Checks
-
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket "/var/lib/pgsql/.s.PGSQL.5432"?

could not connect to source postmaster started with the command:
"/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
Failure, exiting
-bash-4.2$


+++

-bash-4.2$ /usr/pgsql-9.2/bin/pg_ctl -D "/data/db/data" start -l logfile
server starting
-bash-4.2$

+++
-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433  --check
Performing Consistency Checks on Old Live Server

Checking cluster versions   ok
Checking database user is the install user  ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch   ok
Checking for invalid "unknown" user columns ok
Checking for hash indexes   ok
Checking for roles starting with "pg_"  ok
Checking for incompatible "line" data type  ok
Checking for presence of required libraries ok
Checking database user is the install user  ok
Checking for prepared transactions  ok

*Clusters are compatible*
-bash-4.2$

+++

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433

There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting
-bash-4.2$

-bash-4.2$ /usr/pgsql-9.2/bin/pg_ctl -D "/data/db/data" stop -l
logfilewaiting for server to shut down done
server stopped
-bash-4.2$


-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433
Performing Consistency Checks
-
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket "/var/lib/pgsql/.s.PGSQL.5432"?

could not connect to source postmaster started with the command:
"/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
Failure, exiting
-bash-4.2$


Thanks,


Re: pg_upgrade: 9.X to 11.X issue on CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi Luca

-bash-4.2$ "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
waiting for server to start stopped waiting
pg_ctl: could not start server
Examine the log output.
-bash-4.2$

-bash-4.2$ pwd
/var/lib/pgsql
-bash-4.2$

upgrade log :


command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start >> "pg_upgrade_server.log" 2>&1
waiting for server to startFATAL:  unrecognized configuration parameter
"unix_socket_directory"
 stopped waiting
pg_ctl: could not start server
Examine the log output.


On Tue, Jul 23, 2019 at 7:15 AM Luca Ferrari  wrote:

> On Tue, Jul 23, 2019 at 3:56 PM Perumal Raj  wrote:
> > could not connect to source postmaster started with the command:
> > "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
> "/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
> unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
> start
> > Failure, exiting
>
> Is /var/lib/pgsql directory on the system?
> Can you start the instance with the above command?
>
>
> Luca
>


Re: pg_upgrade: 9.X to 11.X issue on CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi All,

Yes, the binary is correct; it came with CentOS 7.6.

Anyhow, thanks for the pointers, which helped me find a workaround
solution :-)

The upgrade is currently running; I will keep you posted with the results.

Thanks,

On Tue, Jul 23, 2019 at 7:43 AM Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 7/23/19 7:17 AM, Perumal Raj wrote:
> >> command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
> >> "/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
> >> unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
> >> start >> "pg_upgrade_server.log" 2>&1
> >> waiting for server to startFATAL:  unrecognized configuration
> >> parameter "unix_socket_directory"
>
> > Well unix_socket_directory changed to  unix_socket_directories in 9.3.
>
> Yeah, this is clearly a version-skew problem.  pg_upgrade knows it
> should say unix_socket_directory not unix_socket_directories to a pre-9.3
> server, but that's going wrong somehow.
>
> > Are you sure that /usr/pgsql-9.2/bin/pg_ctl is really pointing to a 9.2
> > binary?
>
> For quite some time, Red Hat shipped versions of 9.2.x that were patched
> to understand unix_socket_directories not unix_socket_directory.  I would
> be suspicious that the source server was one of those, except that the
> cited path doesn't match where the Red Hat RPMs put it.
>
> regards, tom lane
>


Re: pg_upgrade: 9.X to 11.X issue on CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi All,

Finally, the upgrade completed successfully after implementing the
following workaround:

mv /usr/bin/pg_ctl{,-orig}
echo '#!/bin/bash' > /usr/bin/pg_ctl
echo '"$0"-orig "${@/unix_socket_directory/unix_socket_directories}"' >> /usr/bin/pg_ctl
chmod +x /usr/bin/pg_ctl

Special thanks to 'Ziggy Crueltyfree Zeitgeister'.
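The wrapper works because bash's "${@/pattern/replacement}" rewrites each argument before handing it to the real pg_ctl. A self-contained sketch of just that substitution, with illustrative arguments:

```shell
# Rewrite every argument, replacing the pre-9.3 parameter name with the
# 9.3+ spelling, exactly as the wrapper does before calling pg_ctl-orig.
set -- -w -o "-c unix_socket_directory='/var/lib/pgsql'" start
printf '%s\n' "${@/unix_socket_directory/unix_socket_directories}"
```

This papers over the version skew Tom Lane describes: pg_upgrade sends the old parameter name, while the patched 9.2 server only accepts the new one.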



On Tue, Jul 23, 2019 at 7:51 AM Perumal Raj  wrote:

> Hi All,
>
> Yes, , bin file correct only as it came up with CentOS 7.6 .
>
> Anyhow , thanks for the pointers which helped me to look at hacking
> solution :-)
>
> Currently , upgrade is running. will keep u posted with results.
>
> Thanks,
>
> On Tue, Jul 23, 2019 at 7:43 AM Tom Lane  wrote:
>
>> Adrian Klaver  writes:
>> > On 7/23/19 7:17 AM, Perumal Raj wrote:
>> >> command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
>> >> "/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
>> >> unix_socket_permissions=0700 -c
>> unix_socket_directory='/var/lib/pgsql'"
>> >> start >> "pg_upgrade_server.log" 2>&1
>> >> waiting for server to startFATAL:  unrecognized configuration
>> >> parameter "unix_socket_directory"
>>
>> > Well unix_socket_directory changed to  unix_socket_directories in 9.3.
>>
>> Yeah, this is clearly a version-skew problem.  pg_upgrade knows it
>> should say unix_socket_directory not unix_socket_directories to a pre-9.3
>> server, but that's going wrong somehow.
>>
>> > Are you sure that /usr/pgsql-9.2/bin/pg_ctl is really pointing to a 9.2
>> > binary?
>>
>> For quite some time, Red Hat shipped versions of 9.2.x that were patched
>> to understand unix_socket_directories not unix_socket_directory.  I would
>> be suspicious that the source server was one of those, except that the
>> cited path doesn't match where the Red Hat RPMs put it.
>>
>> regards, tom lane
>>
>


Re: pg_upgrade: 9.X to 11.X issue on CentOS 7.6

2019-07-23 Thread Perumal Raj
Sorry, I missed referencing the link:
https://dba.stackexchange.com/questions/50135/pg-upgrade-unrecognized-configuration-parameter-unix-socket-directory


On Tue, Jul 23, 2019 at 9:20 AM Tom Lane  wrote:

> Perumal Raj  writes:
> > Finally upgrade completed successfully after implementing the following
> > Workaround.
>
> You didn't say where you got these executables from, but if the 9.2 and 11
> packages were from the same packager, you should complain to them about
> it.  If they're patching 9.2 to have unix_socket_directories, they should
> also patch pg_upgrade in later versions to understand that.
>
> regards, tom lane
>


Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi All,

We recently upgraded Postgres from 9.2 to 11.6 and immediately started
seeing performance issues. We were able to fix them by disabling the
parameter enable_seqscan.

Question:
Should I keep the above parameter disabled permanently? If not, why did
the behavior change in the newer version?

Note:
Table ANALYZE was completed as part of the upgrade activity.

Thanks
Raj
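Rather than disabling sequential scans globally, the usual first step is to compare plans for a specific slow query in one session. A sketch (the query itself is a placeholder, per the Slow_Query_Questions wiki page cited below):

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- capture the plan chosen after the upgrade
SET enable_seqscan = off;               -- session-local only, for comparison
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- capture the forced index plan
RESET enable_seqscan;
```

Comparing the two plans' row estimates against actual rows usually shows whether the problem is stale statistics or a costing difference.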


Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi Tom /Adrian,

The issue is not specific to one table or a particular query. Also, no DB
parameters were changed after the upgrade.

Disabling enable_seqscan is the only way I can make most queries run as
they did before the upgrade.

Note:
Some web references say the engine takes some time to adjust until it has
run autovacuum.




On Wed, Jan 29, 2020 at 10:22 AM Tom Lane  wrote:

> Perumal Raj  writes:
> > We have recently upgraded postgres from 9.2 to 11.6 and started seeing
> > performance issue immediately and  able to fix the performance issue
> after
> > disabling parameter: enable_seqscan.
> > Question :
> > Should i keep the above parameter always disabled ? If not why the
> behavior
> > changed in Higher version ?
>
> This is unanswerable with the amount of information you've given.
> Yes, turning off enable_seqscan is a bad idea in general, but why
> you got a worse plan without that requires details.
>
> https://wiki.postgresql.org/wiki/Slow_Query_Questions
>
> regards, tom lane
>


Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi Peter,

I strongly agree.

I used pg_upgrade, which runs "analyze_new_cluster.sh" as a post-upgrade
step across all databases in the cluster.
I have also run a manual VACUUM on all individual tables. However, the
behavior is the same until I disable the parameter mentioned above.

Regards,
Raj


On Wed, Jan 29, 2020 at 2:33 PM Peter J. Holzer  wrote:

> On 2020-01-29 09:39:03 -0800, Perumal Raj wrote:
> > We have recently upgraded postgres from 9.2 to 11.6 and started seeing
> > performance issue immediately and  able to fix the performance issue
> after
> > disabling parameter: enable_seqscan.
>
> How did you upgrade?
>
> If your upgrade involved a dump and restore, you should invoke ANALYZE
> for each database (I think autovacuum will analyze all tables
> eventually, but takes its time).
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>
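The per-database ANALYZE Peter describes can be done with vacuumdb; on recent versions a single invocation covers all databases (the binary path is an assumption), and this is essentially what pg_upgrade's analyze_new_cluster.sh script runs:

```shell
# Analyze every database in the cluster, in three passes of increasing
# statistics detail, so queries get usable statistics quickly.
/usr/pgsql-11/bin/vacuumdb --all --analyze-in-stages
```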


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-03 Thread Perumal Raj
Thanks. Will definitely try that option and keep you posted.

Thanks again for redirecting me to the right group.


Perumal Raju

On Sun, Mar 3, 2019, 6:51 AM Justin Pryzby  wrote:

> Moving to -general list (-hackers is for development topics like proposed
> patches and patch reviews and beta testing and crash reports).
>
> On Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:
> > could not load library "$libdir/pg_reorg":
> > ERROR:  could not access file "$libdir/pg_reorg": No such file or
> directory
>
> As Sergei said, you can run pg_dump -s and look for references to reorg,
> and
> drop them.
>
> Or, you could try this:
> CREATE EXTENSION pg_reorg FROM unpackaged;
>
> Or maybe this:
> CREATE EXTENSION pg_repack FROM unpackaged;
>
> If that works, you can DROP EXTENSION pg_repack;
>
> Otherwise, I think you can maybe do something like:
> DROP SCHEMA pg_repack CASCADE; -- or,
> DROP SCHEMA pg_reorg CASCADE;
>
> Please send output of: \dn
>


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Perumal Raj
Hi Justin

I can see a bunch of functions under the reorg schema.

AS '$libdir/pg_reorg', 'reorg_disable_autovacuum';
AS '$libdir/pg_reorg', 'reorg_get_index_keys';
AS '$libdir/pg_reorg', 'reorg_apply';
AS '$libdir/pg_reorg', 'reorg_drop';
AS '$libdir/pg_reorg', 'reorg_indexdef';
AS '$libdir/pg_reorg', 'reorg_swap';
AS '$libdir/pg_reorg', 'reorg_trigger';
AS '$libdir/pg_reorg', 'reorg_version';

I am not sure about the impact of dropping these functions.

Are these functions seeded (default) ones?

Regards,
Raj
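To see exactly which functions reference the missing library (and would therefore be dropped), a catalog query along these lines can be run in the affected database. This is a sketch of mine, not from the thread:

```sql
-- List every function whose implementation lives in the pg_reorg shared
-- library, together with the schema that holds it.
SELECT n.nspname AS schema, p.proname AS function
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.probin = '$libdir/pg_reorg';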


On Sun, Mar 3, 2019 at 7:38 PM Perumal Raj  wrote:

> Thanks.Will decently try that option and keep you posted.
>
> Thanks again for redirecting to right group.
>
>
> Perumal Raju
>
> On Sun, Mar 3, 2019, 6:51 AM Justin Pryzby  wrote:
>
>> Moving to -general list (-hackers is for development topics like proposed
>> patches and patch reviews and beta testing and crash reports).
>>
>> On Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:
>> > could not load library "$libdir/pg_reorg":
>> > ERROR:  could not access file "$libdir/pg_reorg": No such file or
>> directory
>>
>> As Sergei said, you can run pg_dump -s and look for references to reorg,
>> and
>> drop them.
>>
>> Or, you could try this:
>> CREATE EXTENSION pg_reorg FROM unpackaged;
>>
>> Or maybe this:
>> CREATE EXTENSION pg_repack FROM unpackaged;
>>
>> If that works, you can DROP EXTENSION pg_repack;
>>
>> Otherwise, I think you can maybe do something like:
>> DROP SCHEMA pg_repack CASCADE; -- or,
>> DROP SCHEMA pg_reorg CASCADE;
>>
>> Please send output of: \dn
>>
>


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Perumal Raj
Hi Justin

Does this mean that these functions are default and came with 9.2?
I am wondering how these functions were created in the DB, as the
library ($libdir/pg_reorg) does not exist on the system.

Note:
My schema name is reorg, not pg_reorg.




On Mon, Mar 4, 2019 at 1:45 PM Justin Pryzby  wrote:

> On Mon, Mar 04, 2019 at 01:37:30PM -0800, Perumal Raj wrote:
> > I could see bunch of functions under reorg schema.
>
> Those functions are the ones preventing you from upgrading.
> You should drop schema pg_reorg cascade.
> You can run it in a transaction first to see what it will drop.
> But after the upgrade, you can CREATE EXTENSION pg_repack, which is a fork
> of
> pg_reorg, which is itself no longer maintained.
>
> Justin
>
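Justin's suggestion to test the drop inside a transaction first can be sketched as follows (the schema is named reorg here, per the poster's note that it is not pg_reorg):

```sql
BEGIN;
DROP SCHEMA reorg CASCADE;  -- the NOTICE output lists every object that would go
ROLLBACK;                   -- nothing is dropped; re-run ending in COMMIT once reviewed
```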


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-05 Thread Perumal Raj
Thanks Sergei/Justin for the continued updates.

So the reorg schema might have been created by some scripts prior to
version 9.2?
These are functions in the DB, not the extension. However, these functions
will not run, as the associated libraries no longer exist on the system
(9.2), so I hope there is no impact.

AS '$libdir/pg_reorg', 'reorg_disable_autovacuum';
AS '$libdir/pg_reorg', 'reorg_get_index_keys';
AS '$libdir/pg_reorg', 'reorg_apply';
AS '$libdir/pg_reorg', 'reorg_drop';
AS '$libdir/pg_reorg', 'reorg_indexdef';
AS '$libdir/pg_reorg', 'reorg_swap';
AS '$libdir/pg_reorg', 'reorg_trigger';
AS '$libdir/pg_reorg', 'reorg_version';

Will continue 9.6 upgrade after dropping reorg schema.

One question I need you to address:

Prior to the 9.2-to-9.6 upgrade, I tried a 9.2-to-10.7 upgrade, which
failed with similar errors (see the beginning of this thread).

> could not load library "$libdir/hstore": ERROR:  could not access file
"$libdir/hstore": No such file or directory
> could not load library "$libdir/adminpack": ERROR:  could not access file
"$libdir/adminpack": No such file or directory
> could not load library "$libdir/uuid-ossp": ERROR:  could not access file
"$libdir/uuid-ossp": No such file or directory

These extensions seem to be standard. What are these functions used for,
and is there an alternative or enhanced object in a higher version if I
drop them in 9.2 and continue the upgrade to 10.7?

Thanks and Regards,

On Mon, Mar 4, 2019 at 11:42 PM Sergei Kornilov  wrote:

> Hi
>
> seems this is unpackaged extension, usually installed prior 9.1 release.
> Maybe reorg even does not support "create extension" syntax. That was long
> ago and project homepage is unavailable now. pg_repack documentation
> mention "support for PostgreSQL 9.2 and EXTENSION packaging" as
> improvements.
>
> > Are these functions seeded ( default) one ?
>
> No its not default.
>
> regards, Sergei
>


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-06 Thread Perumal Raj
Awesome, thanks Sergei and Justin.

Finally, I was able to upgrade the DB from 9.2 to 9.6 successfully, after
dropping the reorg schema, without any library issues.
I also installed the contrib package for version 10 and upgraded to
version 10.7 as well.

In both cases I used the --link option, and it took just a fraction of a
second (effectively zero downtime).

Any pointers for pg_repack schema creation?
Will there be any future impact, since I used the --link option?

Regards,
Raju

On Tue, Mar 5, 2019 at 8:21 AM Justin Pryzby  wrote:

> On Tue, Mar 05, 2019 at 08:09:12AM -0800, Perumal Raj wrote:
> > Thanks Sergei/Justin for the continues update.
> >
> > So reorg Schema might be created as part of some scripts prior to 9.2
> > Version ?
>
> I'm guessing they were probably created in 9.2.
>
> > These are the functions in DB not the Extension. However these functions
> > will not run as the associated libraries are not exists in System now
> (9.2)
> > and I hope no impact to system.
>
> I guess someone installed pgreorg, ran its scripts to install its functions
> into the DB, and then removed pgreorg without removing its scripts.
>
> > One Question need your address,
> >
> > Prior to 9.2 to 9.6 upgrade , I have tried 9.2 10.7 upgrade and failed
> > similar error(you can refer beginning o the post ).
> >
> > > could not load library "$libdir/hstore": ERROR:  could not access file
> "$libdir/hstore": No such file or directory
> > > could not load library "$libdir/adminpack": ERROR:  could not access
> file "$libdir/adminpack": No such file or directory
> > > could not load library "$libdir/uuid-ossp": ERROR:  could not access
> file "$libdir/uuid-ossp": No such file or directory
> >
> > These Extension seems to be standard. What is the use of these function
> and
> > do we have any alternative in Higher version or Enhanced object if i drop
> > it in 9.2 and continue upgrade to 10.7 Version.
>
> See Sergei's response:
>
> https://www.postgresql.org/message-id/7164691551378448%40myt3-1179f584969c.qloud-c.yandex.net
>
> You probably want to install this package for the new version (9.6 or 10 or
> 11).
>
> [pryzbyj@TS-DB ~]$ rpm -ql postgresql11-contrib |grep -E
> '(uuid-ossp|adminpack|hstore)\.control'
> /usr/pgsql-11/share/extension/adminpack.control
> /usr/pgsql-11/share/extension/hstore.control
> /usr/pgsql-11/share/extension/uuid-ossp.control
>
> Justin
>


Resolved: Question about pg_upgrade from 9.2 to X.X

2019-03-07 Thread Perumal Raj
Thanks again.

Perumal Raju

On Thu, Mar 7, 2019, 2:32 AM Justin Pryzby  wrote:

> On Wed, Mar 06, 2019 at 09:44:16PM -0800, Perumal Raj wrote:
> > Any pointers for pg_repack schema creation ?
>
> With recent postgres, you should use just: "CREATE EXTENSION pg_repack",
> which
> does all that for you.
>
> > Will there be any impact in the future , Since i used --link option ?
>
> You probably have an old DB directory laying around which is (at least
> partially) hardlinks.  You should remove it .. but be careful to remove the
> correct dir.  My scripts always rename the old dir before running
> pg_upgrade,
> so it's less scary to rm -fr it later.
>
> Justin
>


Autovacuum Transaction Wraparound

2019-03-11 Thread Perumal Raj
Hi Experts

I have noticed that no regular vacuum maintenance was happening in my
database, so I started a weekly job across the cluster. But I am still
seeing gradual growth in transaction ID age.

The DB is still using the default autovacuum_freeze_min_age &
autovacuum_freeze_table_age.

Question: Since I am running the vacuum job regularly (weekly) and the
transaction age is still gradually growing, what happens once I hit the
200M limit (default)?
Should I increase the default value? If so, is there a calculation for
increasing it based on my DB's transaction growth?

Thanks,
Raj


Re: Autovacuum Transaction Wraparound

2019-03-11 Thread Perumal Raj
Hi Adrian/Joshua

Sorry, I failed to mention this in the previous thread:

Autovacuum is already enabled in the cluster and is doing its job
perfectly. The only change is that a manual vacuum is now scheduled
(weekly, cluster-wide) after noticing a 'transaction wraparound' message
during an autovacuum run.

Version : 9.2.24

Query :

SELECT datname, age(datfrozenxid) FROM pg_database
 datname  |age
--+---
 template1| 133492380
 template0| 180987489
 postgres |  93330701
 nagio| 109936658
 arch__old| 109936658
 prod     | 151621905

Settings :

  name   |  setting  | unit
-+---+--
 autovacuum  | on|
 autovacuum_analyze_scale_factor | 0.05  |
 autovacuum_analyze_threshold| 50|
 autovacuum_freeze_max_age       | 200000000 |
 autovacuum_max_workers  | 3 |
 autovacuum_naptime  | 60| s
 autovacuum_vacuum_cost_delay| 20| ms
 autovacuum_vacuum_cost_limit| -1|
 autovacuum_vacuum_scale_factor  | 0.2   |
 autovacuum_vacuum_threshold | 50|

 log_autovacuum_min_duration     | -1        |

Regards,
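The database-level age(datfrozenxid) query above can be drilled down to the tables driving it; a sketch to run in each database:

```sql
-- Ten tables with the oldest relfrozenxid in the current database;
-- these are what autovacuum must freeze to advance datfrozenxid.
SELECT c.oid::regclass AS table_name, age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```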


On Mon, Mar 11, 2019 at 12:07 PM Adrian Klaver 
wrote:

> On 3/11/19 11:51 AM, Perumal Raj wrote:
> > Hi Experts
> >
> > I have noticed in my Database that, there is no regular Vacuum
> > maintenance happening
>
> What Postgres version?
>
> > So i started Weekly Job across cluster. But still i am seeing gradual
> > growth on transacation ID.
>
> What query are you using?
>
> >
> > DB is still using default autovacuum_freeze_min_age &
> > autovacuum_freeze_table_age.
>
> What are the actual settings for?:
>
> https://www.postgresql.org/docs/10/runtime-config-autovacuum.html
>
> >
> > Question : Since i am running regularly vacuum job ( weekly) and the
> > Transaction age is gradually growing , What is next once i hit 200M
> > limit ( default ).
> > Should i increase my default value ? If so any calculation for increase
> > the value based on my DB transaction growth.
> >
> > Thanks,
> > Raj
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Autovacuum Transaction Wraparound

2019-03-11 Thread Perumal Raj
Hi Adrian

What was the full message?

   autovacuum: VACUUM (to prevent wraparound)

Though I am running vacuum manually (nowadays), autovacuum runs perfectly
once its threshold is reached.

What will happen if my DB reaches a 200M transaction age again? (My
understanding is that there are no dead tuples to clean up; I may be
missing part of the concept, so please correct me if I am wrong.)

What will be the impact on DB performance during a vacuum freeze? (My
assumption is that autovacuum will run "vacuum freeze" once the DB age
reaches 200M.)

When should I consider increasing pg_settings values related to
autovacuum?

Regards,



On Mon, Mar 11, 2019 at 12:45 PM Adrian Klaver 
wrote:

> On 3/11/19 12:16 PM, Perumal Raj wrote:
> > Hi Adrian/Joshua
> >
> > Sorry to mention in the previous thread,
> >
> > Auto-vacuum is already enabled in the Cluster and its doing the job
> > perfectly. But only thing manual vacuum scheduled now (weekly Cluster
> > wide) after noticing 'Transaction Wraparound message during Autovacuum
> run.
>
> What was the full message?
>
> >
> > Version : 9.2.24
>
> FYI 9.2 is 1 years+ past EOL.
>
> >
> > Query :
> >
> > SELECT datname, age(datfrozenxid) FROM pg_database
> >   datname  |age
> > --+---
> >   template1| 133492380
> >   template0| 180987489
> >   postgres |  93330701
> >   nagio| 109936658
> >   arch__old| 109936658
> >   prod .   | 151621905
>
> So at some point the server will force a VACUUM to freeze ids and
> prevent wraparound before the age gets to your autovacuum_freeze_max_age
> below. That might even have been the message you saw.
>
> >
> > Settings :
> >
> >name   |  setting  | unit
> > -+---+--
> >   autovacuum  | on|
> >   autovacuum_analyze_scale_factor | 0.05  |
> >   autovacuum_analyze_threshold| 50|
> >   autovacuum_freeze_max_age   | 2 |
> >   autovacuum_max_workers  | 3 |
> >   autovacuum_naptime  | 60| s
> >   autovacuum_vacuum_cost_delay| 20| ms
> >   autovacuum_vacuum_cost_limit| -1|
> >   autovacuum_vacuum_scale_factor  | 0.2   |
> >   autovacuum_vacuum_threshold | 50|
> >
> >   log_autovacuum_min_duration |-1 .   |
> >
> > Regards,
> >
> >
> > On Mon, Mar 11, 2019 at 12:07 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 3/11/19 11:51 AM, Perumal Raj wrote:
> >  > Hi Experts
> >  >
> >  > I have noticed in my Database that, there is no regular Vacuum
> >  > maintenance happening
> >
> > What Postgres version?
> >
> >  > So i started Weekly Job across cluster. But still i am seeing
> > gradual
> >  > growth on transacation ID.
> >
> > What query are you using?
> >
> >  >
> >  > DB is still using default autovacuum_freeze_min_age &
> >  > autovacuum_freeze_table_age.
> >
> > What are the actual settings for?:
> >
> > https://www.postgresql.org/docs/10/runtime-config-autovacuum.html
> >
> >  >
> >  > Question : Since i am running regularly vacuum job ( weekly) and
> the
> >  > Transaction age is gradually growing , What is next once i hit
> 200M
> >  > limit ( default ).
> >  > Should i increase my default value ? If so any calculation for
> > increase
> >  > the value based on my DB transaction growth.
> >  >
> >  > Thanks,
> >  > Raj
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: software or hardware RAID?

2019-03-28 Thread Perumal Raj
Hi All

Just wanted to know the conclusion here:

What is the best RAID method (software or hardware) for a Postgres DB, and
at what RAID level?

Thanks,
Raj




On Sat, Mar 23, 2019 at 3:12 PM Kenneth Marshall  wrote:

> On Sat, Mar 23, 2019 at 12:09:11PM +, Rory Campbell-Lange wrote:
> > On 17/03/19, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> > > We aren't sure whether to use software MDRaid or a MegaRAID card.
> > >
> > > We're buying some new Postgres servers with
> > >
> > > 2 x 240GB Intel SSD S4610 (RAID1 : system)
> > > 4 x 960GB Intel SSD S4610 (RAID10 : db)
> > >
> > > We'll be using Postgres 11 on Debian.
> > >
> > > The MegaRAID 9271-8i with flash cache protection is available from our
> > > provider. I think they may also have the 9361-8i which is 12Gb/s.
> > >
> > > Our current servers which use the LSI 9261 with SSDs and we don't see
> > > any IO significant load as we are in RAM most of the time and the RAID
> > > card seems to flatten out any IO spikes.
> > >
> > > We use MDRaid elsewhere but we've never used it for our databases
> > > before.
> >
> > Apologies for re-heating this email from last week. I could really do
> with the
> > advice.
> >
> > Has anyone got any general comments on whether software RAID or an LSI
> card
> > is preferable?
> >
> > We will be replicating load on an existing server, which has an LSI 9261
> card.
> > Below is some stats from sar showing a "heavy" period of load on vdisk
> sda
> >
> >   00:00:01  DEV tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz
>  await svctm %util
> >   14:15:01  sda  112.82643.09  14986.24138.53  2.09
>  18.50  0.25  2.86
> >   14:25:01  sda  108.52270.17  15682.94147.01  1.87
>  17.22  0.25  2.73
> >   14:35:01  sda  107.96178.25  14868.52139.37  1.70
>  15.73  0.23  2.53
> >   14:45:01  sda  150.97748.94  16919.69117.03  1.83
>  12.11  0.22  3.28
> >
> > Thanks for any advice.
> > Rory
>
> Hi Rory,
>
> The main reason, in my opinion, to use a HW RAID card is for the NVRAM
> battery backed cache to support writing to traditional spinning disks.
> Since your SSDs have power-loss support, you do not need that and the HW
> RAID controller. For database use, you would almost certainly be using
> RAID 10 and software RAID 10 is extremely performant. I am in the middle
> of setting up a new system with NVMe SSD drives and HW RAID would be a
> terrible bottle-neck and software RAID is really the only realistice
> option.
>
> Regards,
> Ken
>
>


Recommendation to run vacuum FULL in parallel

2019-04-02 Thread Perumal Raj
Hi All,

We are planning to reclaim unused space from a 9.2 Postgres cluster.

Method: VACUUM FULL
DB size: 500 GB
Expected space to reclaim: 150 GB
work_mem: 250 MB
maintenance_work_mem: 20 GB

*Questions:*

1. The vacuumdb -j (parallel) option is not available in version 9.2.
   How can we run VACUUM FULL in parallel? At present it takes 8 hours if
   run sequentially (VACUUM FULL VERBOSE;).

2. If we run VACUUM FULL, do we need to run REINDEX/ANALYZE separately?

3. What is the best way to run VACUUM FULL within a smaller window?

Thanks,
Raj
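Since 9.2's vacuumdb has no -j option, one hedged sketch is to generate the table list and fan out several psql sessions with xargs; the database name and degree of parallelism here are assumptions:

```shell
# Run VACUUM FULL table-by-table, four tables at a time (9.2 lacks
# vacuumdb -j). Note that VACUUM FULL takes an ACCESS EXCLUSIVE lock
# on each table while it is rewritten.
psql -At -d prod -c "SELECT quote_ident(schemaname) || '.' || quote_ident(relname)
                     FROM pg_stat_user_tables" |
  xargs -n1 -P4 -I{} psql -d prod -c 'VACUUM FULL ANALYZE {};'
```

Running ANALYZE alongside FULL, as above, avoids a separate statistics pass; indexes are rebuilt by VACUUM FULL itself, so no explicit REINDEX is needed.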


Re: Recommendation to run vacuum FULL in parallel

2019-04-03 Thread Perumal Raj
Hi All

Thanks for all your valuable  inputs,

Here is some more data:

Though we have 150 GB of free space spread across 500 tables, the DB is
growing at a rate of 1 GB every other day.
We also have a manual vacuum job scheduled to run weekly, so it seems the
space is not always being reused?

To conclude the requirement: the only way to parallelize is multiple
scripts, and there is no need to run REINDEX separately.
Question: Do we need to consider table dependencies while preparing the
scripts, in order to avoid table locks during VACUUM FULL?

At present maintenance_work_mem is set to 20 GB.
Question: Do we need to tweak any other parameters?

Note:
We are planning this activity during application downtime only.

Let me know if I missed anything.

Regards,
Raj





On Wed, Apr 3, 2019 at 8:42 AM rihad  wrote:

> > And future updates can reuse it, too (an update is very similar to an
> > insert+delete).
>
>
> Hm, then it's strange our DB takes 6 times as much space compared to a
> freshly restored one (only the public schema is considered).
>
> > Not if autovacuum has a chance to run between updates.
>
> Ours is run regularly, although we had to tweak it down not to interfere
> with normal database activity, so it takes several hours each run on the
> table. We did that by setting autovacuum_vacuum_scale_factor = 0.05 from
> default 0.2.
>
>
>


Re: Recommendation to run vacuum FULL in parallel

2019-04-03 Thread Perumal Raj
Hi Stephen,

Thanks for the response.

Version : 9.2
We never ran VACUUM FULL in the past; all we do is a manual vacuum
( weekly ).
Based on observation ( a test run ), we were able to reclaim 150 GB out
of 500 GB.

We are heading into a planned downtime soon, so we are thinking of running
FULL during that time.

Reasons for running FULL : 1. Reclaim unused space that Postgres is never
going to use.
2. FULL may increase performance.
3. Daily backup size and time will be reduced after reclaiming 150 GB.

Thanks,
Raj


On Wed, Apr 3, 2019 at 1:16 PM Stephen Eilert 
wrote:

> > Ideally VACUUM FULL should not require a giant lock on the table.
>
> It is a massively expensive operation, regardless. Not sure if it is
> something you want to run in production outside a maintenance window.
>
> I would argue that frequent vacuum full is an antipattern. This will
> become a matter of superstition in your company.
>
> If db size growth is a problem, make autovacuum more aggressive. Or run
> your manual vacuum job (not full) more often than a week. Daily, if you
> have to. This will not reclaim disk space as reported by the OS, but it
> should make the space available for new row versions, so db should mostly
> stop growing from the OS point of view(mostly, because you may be adding
> new data, right?). If it is still a problem, then there may be something
> else going on.
>
> Which PG version is that?
>
>
> — Stephen
> On Apr 3, 2019, 10:02 AM -0700, Perumal Raj , wrote:
>
> Hi All
>
> Thanks for all your valuable  inputs,
>
> Here is some more data,
>
> Though we have 150 GB free space spread across 500 Tables , Every
> alternative day DB is growing with 1 GB rate.
> Also,We have manual vacuum job scheduled to run weekly basis, So seems to
> be space is not reusing all the time ?
>
> So conclude the requirement here , The only way to parallelism is multiple
> script. And no need to do REINDEX exclusively.
> Question : Do we need to consider  Table dependencies while preparing
> script in order to avoid table locks during vacuum full ?
>
> At present Maintenance work memory set to 20 GB.
> Question : Do we need to tweak any other parameters ?
>
> Note:
> We are planning this activity with Application Downtime only.
>
> Let me know if i missed anything.
>
> Regards,
> Raj
>
>
>
>
>
> On Wed, Apr 3, 2019 at 8:42 AM rihad  wrote:
>
>> > And future updates can reuse it, too (an update is very similar to an
>> > insert+delete).
>>
>>
>> Hm, then it's strange our DB takes 6 times as much space compared to
>> freshly restored one (only public schema is considered).
>>
>> > Not if autovacuum has a chance to run between updates.
>>
>> Ours is run regularly, although we had to tweak it down not to interfere
>> with normal database activity, so it takes several hours each run on the
>> table. We did that by setting autovacuum_vacuum_scale_factor = 0.05 from
>> default 0.2.
>>
>>
>>
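Stephen's suggestion to make autovacuum more aggressive can also be applied per table rather than cluster-wide. A sketch, with database and table names as placeholders:

```shell
# Lower one hot table's autovacuum thresholds so vacuum triggers after ~1%
# of rows change instead of the default 20%. Names are placeholders; these
# per-table storage parameters exist on 9.2 as well.
tune_autovacuum() {
  local db="$1" table="$2"
  psql -d "$db" -c "ALTER TABLE ${table} SET (
      autovacuum_vacuum_scale_factor = 0.01,
      autovacuum_analyze_scale_factor = 0.005);"
}
# tune_autovacuum mydb big_hot_table
```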


Re: Recommendation to run vacuum FULL in parallel

2019-04-10 Thread Perumal Raj
Thanks Kevin for the inputs.

In my case there are 500+ tables, the biggest chunk being 30 GB ( table
only ) plus its indexes.
So I created 6 batches and executed them in parallel. All my scripts
completed in 2 hours, and the DB size came down from 500 GB to 300 GB.

Yes, I do see a CPU spike, but I did the whole activity during full
application downtime.

Going forward I am going to run vacuum on a daily basis to maintain the DB
size.

Also, table/DB age came down drastically.
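The table/DB age mentioned above can be watched with a simple catalog query; a sketch (database name is a placeholder):

```shell
# Report transaction-ID age per database; vacuum freezes old tuples, which
# is why this figure drops after a cluster-wide vacuum run.
show_xid_age() {
  psql -d "$1" -At -c "SELECT datname, age(datfrozenxid)
                       FROM pg_database ORDER BY 2 DESC;"
}
# show_xid_age postgres
```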

Thanks
Raj

On Thu, Apr 4, 2019 at 12:53 PM Kevin Brannen  wrote:

> *From:* Perumal Raj 
>
> So conclude the requirement here , The only way to parallelism is multiple
> script. And no need to do REINDEX exclusively.
>
> Question : Do we need to consider  Table dependencies while preparing
> script in order to avoid table locks during vacuum full ?
>
>
>
> We have a small bash script (see below) that get the list of tables and
> their sizes, sorted smallest to largest, and do “vacuum full” one at a time
> because (as someone else pointed out) this is very I/O intensive. That
> order also helps to ensure we finish because some of our installs are at
> the edge of running out of space (an issue we’re dealing with). I probably
> wouldn’t have a problem doing 2 at a time, but we do this in the middle of
> the night when activity is lowest and it only takes 1-2 hours, so we’re
> good with it. It sounds like you have a lot more data though.
>
>
>
> You might also consider putting the data into different tablespaces which
> are spread over multiple disks to help I/O. If you can, use SSD drives,
> they help with speed quite a bit. 😊
>
>
>
> Don’t worry about table dependencies. This is a physical operation, not a
> data operation.
>
>
>
> HTH,
>
> Kevin
>
>
>
> $PGPATH/psql -t -c "
>
> WITH s AS (SELECT nspname || '.' || relname AS TABLE_NAME,
> pg_total_relation_size(c.oid) AS total_bytes
>
>   FROM pg_class c
>
>   LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
>
>   WHERE relkind = 'r' AND nspname NOT IN ( 'pg_catalog',
> 'information_schema' )
>
>   ORDER BY 2 )
>
> SELECT table_name FROM s
>
> " |
>
> while read t ; do echo "" ; echo $t; $PGPATH/vacuumdb -w -z -f -t $t ;
> done
>
>


Centos : Load Average :OS Process Name : migration/1, migration/2 , migration/n

2019-05-22 Thread Perumal Raj
Hi All,

We have recently migrated a Postgres DB to a new server ( CentOS 6.9 ).
Both source and target versions of OS/DB are the same, and the
configuration is identical ( apples to apples ).

But we have started seeing a lot of processes named 'migration' at the OS
level on the new server, which drives up the load average most of the time.

Some sample processes from the top command:

67 root  RT   0 000 R 34.8  0.0 913:06.38
[migration/16]

   155 root  RT   0 000 S 32.2  0.0 845:46.53
[migration/38]

35 root  RT   0 000 S 27.6  0.0 927:15.27
[migration/8]

11 root  RT   0 000 S 21.4  0.0   1033:45
[migration/2]

   131 root  RT   0 000 S 21.4  0.0 812:00.70
[migration/32]

87 root  RT   0 000 S 20.7  0.0 394:28.32
[migration/21]

Anyone come-across this situation ? Please share your thoughts .

Thanks,
Raju
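For anyone reproducing this, the threads in question can be inspected directly with standard procps options (no Postgres involvement):

```shell
# List the kernel's per-CPU "migration" threads (real-time scheduler workers
# that move tasks between cores) with their accumulated CPU time; steady
# growth here indicates heavy task migration, not a Postgres process.
list_migration_threads() {
  ps -e -o pid,rtprio,comm,time | awk 'NR == 1 || $3 ~ /^migration/'
}
# list_migration_threads
```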


Re: Centos : Load Average :OS Process Name : migration/1, migration/2 , migration/n

2019-05-22 Thread Perumal Raj
Thanks for the responses.

Adrian,

top -c is not showing much more; the process name itself is [migration/n].

Thomas,

Thanks for finding the same page; yes, I had come across it, but was
wondering whether anyone in our community has faced the same issue.

Regards,
Raju

On Wed, May 22, 2019 at 2:28 PM Thomas Munro  wrote:

> On Thu, May 23, 2019 at 9:18 AM Perumal Raj  wrote:
> > We have recently migrated postgres DB to out of of server ( Centos 6.9 )
> .
> > Both Source and Target versions of OS/DB are same . Also Configuration
> is Apple-Apple.
> >
> > But We started seeing lot of process name 'migration' at OS Level in new
> server which is triggering Load average most of the time  .
> >
> > Some sample process from top command,
> >
> > 67 root  RT   0 000 R 34.8  0.0 913:06.38
> [migration/16]
> >155 root  RT   0 000 S 32.2  0.0 845:46.53
> [migration/38]
> > 35 root  RT   0 000 S 27.6  0.0 927:15.27
> [migration/8]
> > 11 root  RT   0 000 S 21.4  0.0   1033:45
> [migration/2]
> >131 root  RT   0 000 S 21.4  0.0 812:00.70
> [migration/32]
> > 87 root  RT   0 000 S 20.7  0.0 394:28.32
> [migration/21]
> >
> > Anyone come-across this situation ? Please share your thoughts .
>
> I don't know anything about this, but I found a claim (without much
> real explanation) that Linux < 3.6.11 had some kind of problem in this
> area, could be relevant:
>
>
> https://serverfault.com/questions/674685/kernel-processes-periodically-eating-cpu-during-high-load
> https://bugs.gentoo.org/394487
> https://bugzilla.kernel.org/show_bug.cgi?id=47341
>
> --
> Thomas Munro
> https://enterprisedb.com
>


Re: Centos : Load Average :OS Process Name : migration/1, migration/2 , migration/n

2019-05-26 Thread Perumal Raj
Thanks for the update, Ron.

Hopefully I will get some further leads on this issue; will keep you posted.

On Wed, May 22, 2019 at 6:07 PM Ron  wrote:

> On 5/22/19 4:18 PM, Perumal Raj wrote:
>
> Hi All,
>
> We have recently migrated postgres DB to out of of server ( Centos 6.9 ) .
> Both Source and Target versions of OS/DB are same . Also Configuration is
> Apple-Apple.
>
> But We started seeing lot of process name 'migration' at OS Level in new
> server which is triggering Load average most of the time  .
>
> Some sample process from top command,
>
> 67 root  RT   0 000 R 34.8  0.0 913:06.38
> [migration/16]
>
>155 root  RT   0 000 S 32.2  0.0 845:46.53
> [migration/38]
>
> 35 root  RT   0 000 S 27.6  0.0 927:15.27
> [migration/8]
>
> 11 root  RT   0 000 S 21.4  0.0   1033:45
> [migration/2]
>
>131 root  RT   0 000 S 21.4  0.0 812:00.70
> [migration/32]
>
> 87 root  RT   0 000 S 20.7  0.0 394:28.32
> [migration/21]
>
> Anyone come-across this situation ? Please share your thoughts .
>
>
> I googled "linux *root* migration" and found this:
>
> https://serverfault.com/questions/674685/kernel-processes-periodically-eating-cpu-during-high-load
>
> It has some pretty good answers.
>
> --
> Angular momentum makes the world go 'round.
>


Flood Warning message : user=[unknown],db=[unknown],host= WARNING: pg_getnameinfo_all() failed: Temporary failure in name resolution

2019-06-05 Thread Perumal Raj
Hi All,

We have recently noticed that pg_log in our development environment is
flooded with the following message:

[64459]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution
[64463]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution
[64464]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution
[64548]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution
[64551]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution
[64553]: [1-1] user=[unknown],db=[unknown],host= WARNING:
 pg_getnameinfo_all() failed: Temporary failure in name resolution

Not sure how to address this message.

Much appreciated if someone can shed some light on this.

Version : 9.2.23

Regards,
Raj
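This warning is typically emitted when the server attempts a reverse-DNS lookup on each client address for logging. Assuming that is the cause here, a minimal mitigation sketch is to turn hostname logging off (and keep pg_hba.conf rules in IP/CIDR form so no forward lookups are needed either):

```
# postgresql.conf fragment -- avoid per-connection reverse-DNS lookups
log_hostname = off
```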


Re: Flood Warning message : user=[unknown],db=[unknown],host= WARNING: pg_getnameinfo_all() failed: Temporary failure in name resolution

2019-06-05 Thread Perumal Raj
Thanks Steve for the response. Yes, we had a DNS server change.
But the conf file has been updated with the right DNS server, and the name
resolves when I cross-check with nslookup against client_addr.

By the way, what is the right syntax for the LDAP configuration?

I am using the one below, and getting an LDAP authentication error even
though I am able to log in to the server with the same password.

host    all    all    0.0.0.0/0    ldap ldapserver=ldap.xxx.com
ldapport=389 ldaptls=1 ldapbasedn="dc=domain,dc=com"
ldapbinddn="cn=auth_user,dc=domain,dc=com"
ldapbindpasswd=encrypted_password ldapsearchattribute=uid

Version : 9.2

Regards,
Raju


On Wed, Jun 5, 2019 at 11:08 AM Steve Crawford <
scrawf...@pinpointresearch.com> wrote:

> On Wed, Jun 5, 2019 at 10:13 AM Perumal Raj  wrote:
>
>> Hi All,
>>
>> We have recently noticed in our development environment pg_log with
>> flooded message.
>>
>> [64459]: [1-1] user=[unknown],db=[unknown],host= WARNING:
>>  pg_getnameinfo_all() failed: Temporary failure in name resolution...
>>
>
> First thing I'd check is that DNS is functioning correctly (including
> local resolution settings or caching name resolvers).
>
> Cheers,
> Steve
>
>


Re: Flood Warning message : user=[unknown],db=[unknown],host= WARNING: pg_getnameinfo_all() failed: Temporary failure in name resolution

2019-06-06 Thread Perumal Raj
Hi All,

We are using the following format for LDAP authentication:

host    all    all    0.0.0.0/0    ldap ldapserver=ldap.xxx.com
ldapport=389 ldaptls=1 ldapbasedn="dc=domain,dc=com"
ldapbinddn="cn=auth_user,dc=domain,dc=com"
ldapbindpasswd=encrypted_password ldapsearchattribute=uid

1. It successfully retrieves user information when we run ldapsearch:
ldapsearch -H "ldaps://ldap..com" -W -D
"cn=auth_user,ou=people,dc=domain,dc=com" -b "dc=domain,dc=com"
"uid=ldap_user"

2. The same LDAP server authenticates ssh / sudo successfully.

But the issue is while connecting to the DB:

-bash-4.1$ psql -h dbhost.domain.com -Atc "select 'success'"
-Uldap_user postgres

Password for user ldap_user:

psql: FATAL:  LDAP authentication failed for user "ldap_user"

-bash-4.1$

Is there any way to validate the ldap_user password using ldapsearch?
Or is there any specific format required in pg_hba.conf to configure LDAP?

Regards,
Raj
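On the first question above: yes — an LDAP simple bind as the user exercises exactly the same password check. A sketch, reusing the hypothetical host and base DN from the pg_hba.conf line (the user's full DN must match your actual directory layout):

```shell
# Bind to the directory as the target user; exit status 0 from ldapsearch
# means the bind, and therefore the password, is valid. Host and DNs are
# placeholders.
check_ldap_password() {
  local user_dn="uid=$1,ou=people,dc=domain,dc=com"
  ldapsearch -x -H "ldap://ldap.xxx.com:389" -D "$user_dn" -W \
    -b "$user_dn" -s base dn > /dev/null \
    && echo "bind OK" || echo "bind failed"
}
# check_ldap_password ldap_user   # prompts for the user's password
```

Separately, it is worth double-checking that ldapbindpasswd in pg_hba.conf holds the literal bind password — PostgreSQL passes it to the LDAP server verbatim, not as an encrypted or hashed value.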

On Wed, Jun 5, 2019 at 4:56 PM Perumal Raj  wrote:

> Thanks Steve for the response, Yes we had DNS server change.
> But conf file updated with right DNS server and its resolving when i
> crosscheck with nslookup against clinet_addr
>
> By the way , What is the right syntax for LDAP configuration ,
>
> I am using the one below, and getting LDAP authentication error though i
> was able to login to server with same password.
>
> hostall all 0.0.0.0/0   ldap ldapserver=ldap.xxx.com
> ldapport=389 ldaptls=1 ldapbasedn="dc=domain,dc=com"
> ldapbinddn="cn=auth_user,dc=domain,dc=com"
> ldapbindpasswd=encrypted_password ldapsearchattribute=uid
>
> Version :9..2
>
> Regards,
> Raju
>
>
> On Wed, Jun 5, 2019 at 11:08 AM Steve Crawford <
> scrawf...@pinpointresearch.com> wrote:
>
>> On Wed, Jun 5, 2019 at 10:13 AM Perumal Raj  wrote:
>>
>>> Hi All,
>>>
>>> We have recently noticed in our development environment pg_log with
>>> flooded message.
>>>
>>> [64459]: [1-1] user=[unknown],db=[unknown],host= WARNING:
>>>  pg_getnameinfo_all() failed: Temporary failure in name resolution...
>>>
>>
>> First thing I'd check is that DNS is functioning correctly (including
>> local resolution settings or caching name resolvers).
>>
>> Cheers,
>> Steve
>>
>>
>