Unable to execute Query in parallel for partitioned table

2021-02-10 Thread Brajendra Pratap
Hi,

I am unable to execute the query below in parallel. Please suggest how I can
achieve parallelism here.

select count(*) over () VIEWALLROWCOUNT, abc.*,
       ROW_NUMBER() OVER (order by trn_transaction_date desc) AS RowNumber
from (select * from transactions where trn_store_date_id=20201202) abc;

The query plan is as follows:

explain analyze select count(*) over () VIEWALLROWCOUNT, abc.*,
       ROW_NUMBER() OVER (order by trn_transaction_date desc) AS RowNumber
from (select * from transactions where trn_store_date_id=20201218) abc;

LOG:  duration: 25820.176 ms  statement: explain analyze select count(*)
over () VIEWALLROWCOUNT, abc.*, ROW_NUMBER() OVER (order by
trn_transaction_date desc) AS RowNumber from (
select * from transactions where trn_store_date_id=20201218) abc;

                                  QUERY PLAN
--------------------------------------------------------------------------------
 WindowAgg  (cost=4474843.51..4498066.81 rows=774110 width=21297) (actual time=21455.495..25241.738 rows=795190 loops=1)
   ->  WindowAgg  (cost=4474843.51..4488390.44 rows=774110 width=21289) (actual time=10588.494..15311.865 rows=795190 loops=1)
         ->  Sort  (cost=4474843.51..4476778.79 rows=774110 width=21281) (actual time=10588.422..11771.300 rows=795190 loops=1)
               Sort Key: transactions.trn_transaction_date DESC
               Sort Method: external merge  Disk: 1496856kB
               ->  Result  (cost=0.00..270640.32 rows=774110 width=21281) (actual time=0.117..4504.159 rows=795190 loops=1)
                     ->  Append  (cost=0.00..262899.22 rows=774110 width=21281) (actual time=0.094..1449.532 rows=795190 loops=1)
                           ->  Seq Scan on transactions  (cost=0.00..0.00 rows=1 width=47554) (actual time=0.019..0.019 rows=0 loops=1)
                                 Filter: (trn_store_date_id = 20201218)
                           ->  Index Scan using idx_202012_trn_store_date_id on transactions_202012  (cost=0.56..259028.67 rows=774109 width=21281) (actual time=0.074..1357.764 rows=795190 loops=1)
                                 Index Cond: (trn_store_date_id = 20201218)
 Planning Time: 116.472 ms
 Execution Time: 25676.098 ms

Note: We have tried different options like max_worker_processes,
max_parallel_workers, max_parallel_workers_per_gather, and
max_parallel_maintenance_workers to execute it in parallel, but no luck.

Please suggest.
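For anyone reproducing this, the session settings that gate parallel plans can be inspected before running EXPLAIN; a diagnostic sketch (parameter names are standard PostgreSQL settings, values here are only illustrative):

```sql
-- Settings that must allow parallelism in this session
SHOW max_parallel_workers_per_gather;  -- must be > 0
SHOW min_parallel_table_scan_size;
SHOW parallel_setup_cost;

-- Make parallel plans artificially cheap to see whether one is possible at all
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
EXPLAIN
SELECT * FROM transactions WHERE trn_store_date_id = 20201218;
```

If no Gather node appears even with these settings, the plan shape itself (here, the window functions over the full ordered result) is what prevents parallelism.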

Thanks


Re: Unable to execute Query in parallel for partitioned table

2021-02-11 Thread Brajendra Pratap
Hi Albe,

Thank you so much for the information; I will check this and get back to you
if any help is required.

I have a doubt: why didn't parallelism work here? Could you please guide me?

Thank you so much again.

On Thu, 11 Feb, 2021, 1:23 PM Laurenz Albe, 
wrote:

> On Thu, 2021-02-11 at 05:09 +0530, Brajendra Pratap wrote:
> > I am unable to execute the below in parallel plz suggest how can I
> achieve parallelism here.
> >
> > select count(*) over () VIEWALLROWCOUNT,abc.*,ROW_NUMBER()  OVER (order
> by trn_transaction_date desc ) AS RowNumber from (
> > select * from transactions where trn_store_date_id=20201202) abc;
> >
> > [query plan and parallel-settings note trimmed; quoted in full in the
> > original message above]
>
> I don't think parallelization will help you here.
>
> Your problem is probably the "abc.*" in the SELECT list.
>
> There must be really large data in this table, so it takes a long time to
> fetch and
> sort the rows.  Try selecting only the columns you need.
>
> Alternatively, add a LIMIT clause.  Do you really need all 80 rows?
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
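Laurenz's two suggestions (narrow the select list, add a LIMIT) can be combined into a rewritten query; a hedged sketch, assuming the application only needs trn_transaction_date plus a page of rows (the other column choices are placeholders):

```sql
-- Select only the columns actually displayed, and paginate rather than
-- returning all ~795k wide rows
SELECT count(*) OVER () AS viewallrowcount,
       trn_transaction_date,
       -- add only the columns the application needs here
       ROW_NUMBER() OVER (ORDER BY trn_transaction_date DESC) AS rownumber
FROM transactions
WHERE trn_store_date_id = 20201218
ORDER BY trn_transaction_date DESC
LIMIT 50 OFFSET 0;
```

Narrowing the row width also shrinks the external merge sort (1.4 GB of spill in the plan above), which is where most of the time goes.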


Re: Unable to execute Query in parallel for partitioned table

2021-02-11 Thread Brajendra Pratap
Hi Albe,

We have checked as per your suggestion and we are good now.

Thank you !!!


On Thu, 11 Feb, 2021, 8:49 PM Brajendra Pratap, <
brajendra.pratap...@gmail.com> wrote:

> Hi Albe,
>
> Thank you so much for the information; I will check this and get back to
> you if any help is required.
>
> I have a doubt: why didn't parallelism work here? Could you please guide me?
>
> Thank you so much again.
>
> [Laurenz Albe's reply and the quoted query plan trimmed; quoted in full in
> the previous message]


Fwd: BUG #16513: Postgresql HA Cluster

2020-06-28 Thread Brajendra Pratap Singh
Hi Team,

Please help us with the query below.

Thanks

-- Forwarded message -
From: Daniel Gustafsson 
Date: Sat, 27 Jun, 2020, 3:16 AM
Subject: Re: BUG #16513: Postgresql HA Cluster
To: , PostgreSQL mailing lists <
pgsql-b...@lists.postgresql.org>


> On 26 Jun 2020, at 16:10, PG Bug reporting form 
wrote:

> Is there any functionality present in PostgreSQL for high availability
> where we can set up multiple nodes/instances in READ-WRITE mode for a
> single database, so that in case of one node/instance failing we can use
> the second one without failure, or in less time? This is just like the
> Oracle RAC concept.

This mailing list is for bug reports against the PostgreSQL server, which the
above isn't.  Please see the -general mailing list for general questions.
More information on how to subscribe can be found at:

https://www.postgresql.org/list/

cheers ./daniel


Postgresql HA Cluster

2020-06-28 Thread Brajendra Pratap Singh
Hi All,

Is there any functionality present in PostgreSQL for a High Availability
Cluster where we can set up multiple nodes/instances in READ-WRITE mode for
a single database, so that in case of one node/instance failing it will
automatically fail over the traffic to the second node/instance (without
failure, or in less time)? This is just like the Oracle RAC concept.

Here "High Availability Cluster" means there will be zero downtime in case
any one node/instance fails.

Please help us to know this.

Thanks & Regards,
Brajendra


Re: Postgresql HA Cluster

2020-06-29 Thread Brajendra Pratap Singh
Hi Albe,

One more concern: regarding the compatibility of Patroni and repmgr tool
versions with CentOS 8 and PostgreSQL versions 10/11/12, could you please
provide your valuable input?

Thanks
Brajendra

On Mon, 29 Jun, 2020, 1:03 PM Laurenz Albe, 
wrote:

> On Sun, 2020-06-28 at 09:10 +0530, Brajendra Pratap Singh wrote:
> > Is there any functionality present in postgresql for High Availability
> Cluster where we can setup
> > multiple nodes/instances in READ-WRITE mode for single database so that
> incase of one node/instance
> > failure it will automatically failover the traffic to 2nd node/instance
> (without
> > failure or in less time) ,this is just like oracle RAC concept .
> >
> > Here High Availability Cluster means there will be zero downtime incase
> of any one node/instance failure.
>
> There is no such functionality built into PostgreSQL.
>
> An architecture like Oracle RAC is not ideal for high availability, since
> the
> ASM/Tablespace/Segment "file system" is a single point of failure.
>
> You can use solutions like Patroni or repmgr for high availability.
>
> That would not provide a multi-master solution, though.  There are some
> commercial solutions for that, but be warned that it would require
> non-trivial changes to your application.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Check Replication lag status

2020-07-02 Thread Brajendra Pratap Singh
Hi,

If my primary PostgreSQL database crashes and there is some lag between the
primary and standby databases, how can I find out from the standby side up
to what point in time data is available on the standby PostgreSQL database?

Thanks,
Brajendra
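On the standby itself, replay progress and the commit time of the last replayed transaction can be queried; a sketch for PostgreSQL 10 and later (on 9.x the equivalent functions are named pg_last_xlog_receive_location and pg_last_xlog_replay_location):

```sql
-- Run on the standby: how far has WAL replay progressed, and when was the
-- last replayed transaction committed on the primary?
SELECT pg_last_wal_receive_lsn()        AS received_lsn,
       pg_last_wal_replay_lsn()         AS replayed_lsn,
       pg_last_xact_replay_timestamp()  AS last_replayed_commit_time;
```

last_replayed_commit_time answers "up to what point in time" directly: data committed on the primary after that timestamp is not yet on the standby.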


restore_command for postgresql streaming replication

2020-07-02 Thread Brajendra Pratap Singh
Hi,

Please help us configure restore_command in the replication server's
recovery.conf file for the case of "ERROR: requested WAL segment has already
been removed" on the primary, given that we have implemented pgbackrest on
the primary database.

Primary database -

archive_command = 'pgbackrest --stanza=uatodb archive-push %p'

Thanks
Singh
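Given the archive_command shown above, the matching pgBackRest fetch on the replica uses the archive-get command; a sketch assuming the same stanza name and a repository reachable from the replica:

```
# recovery.conf on the replica (assumes stanza 'uatodb' as configured above)
restore_command = 'pgbackrest --stanza=uatodb archive-get %f "%p"'
```

Because the WAL is pulled from the pgBackRest repository rather than copied from the primary's pg_wal/pg_xlog directory, no passwordless SSH to the primary is needed for this step as long as the replica can reach the repository.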


Kerberos-Postgresql implementation for user authentication

2020-07-09 Thread Brajendra Pratap Singh
Hi,

Please share the steps to implement Kerberos with PostgreSQL for user
authentication purposes. Also share the documentation, if any.

Thanks,
Brajendra
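On the server side, Kerberos (GSSAPI) authentication is switched on per connection class in pg_hba.conf; a minimal sketch (the realm and address range are placeholders, and the server must be built with GSSAPI support and have a valid keytab configured via the krb_server_keyfile setting):

```
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    gss include_realm=0 krb_realm=EXAMPLE.COM
```

include_realm=0 strips the @EXAMPLE.COM suffix so the Kerberos principal maps directly onto the PostgreSQL role name.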


Re: Kerberos-Postgresql implementation for user authentication

2020-07-09 Thread Brajendra Pratap Singh
Hi Niels,

Thanks for your prompt response, and I apologise for the incomplete
information. Actually, we are trying to implement Kerberos authentication
for anyone connecting as a PostgreSQL DB user/role.

Please help us here with documentation.

Postgresql-DB version : 9.6/10/11/12
OS version : CentOS 6.9/7/8

Thanks,
Brajendra

On Thu, 9 Jul, 2020, 1:52 PM Niels Jespersen,  wrote:

>
> *From:* Brajendra Pratap Singh 
> *Sent:* 9 July 2020 09:05
> *To:* pgsql-gene...@postgresql.org
> *Subject:* Kerberos-Postgresql implementation for user authentication
>
> Hi,
>
> Please share the steps to implement Kerberos with PostgreSQL for user
> authentication purposes. Also share the documentation, if any.
>
> Thanks,
> Brajendra
>
> If you are trying to do passwordless login for Windows users, then this
> explains that scenario quite well. We are using SSPI for end users.
>
> https://wiki.postgresql.org/wiki/Configuring_for_single_sign-on_using_SSPI_on_Windows


Postgresql Backup Encryption

2020-07-10 Thread Brajendra Pratap Singh
Hi,

Please let me know how to encrypt/decrypt PostgreSQL backups using
pg_dump and pgbackrest.

Also let me know of any other good possibilities.

Thanks,
Singh
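pg_dump has no built-in encryption, so one common approach is to pipe the dump through openssl; a sketch under the assumption that passphrase-based AES is acceptable (database and file names are placeholders). pgBackRest, by contrast, has native repository encryption via its repo1-cipher-type and repo1-cipher-pass options.

```shell
# Hypothetical backup pipeline: encrypt a custom-format dump with AES-256.
# pg_dump -Fc mydb | openssl enc -aes-256-cbc -pbkdf2 \
#     -pass file:/etc/pg_backup.key -out mydb.dump.enc
# Decrypt and restore:
# openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/etc/pg_backup.key \
#     -in mydb.dump.enc | pg_restore -d mydb

# The openssl encrypt/decrypt round trip itself works on any byte stream:
printf 'backup-bytes' | openssl enc -aes-256-cbc -pbkdf2 \
    -pass pass:secret -out /tmp/demo.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret -in /tmp/demo.enc
```

Keep the key file outside the backup destination; an encrypted backup stored next to its passphrase protects nothing.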


Wal_keep_segment value too high

2020-07-10 Thread Brajendra Pratap Singh
Hi,

What will happen if the wal_keep_segments value is too high? Does it affect
database performance in any way, such as checkpoints, or can it cause any
delay in replication sync or in WAL record transfer from the primary to the
replica PostgreSQL database?

Thanks,
Brajendra


Re: Wal_keep_segment value too high

2020-07-10 Thread Brajendra Pratap Singh
Hi Andreas,

Actually, there will be a sync gap of almost 3-4 hours between the primary
and the streaming-replication database during the ETL jobs, so I just want
to make sure whether a high value of wal_keep_segments affects replication
in any way.

Second, we have implemented pgbackrest in archive_command, so archives are
generated in compressed form along with some binary data. What will the
restore_command be in that case if we lose any WALs from the primary's WAL
directory, and how will they be applied to the replica via restore_command?
That is, does it require passwordless SSH between primary and replica?

Third, does a high value of wal_keep_segments affect checkpoint operations,
DML operations, SELECT queries, or the writing of data from memory to disk
in any way?

Thanks,
Brajendra


On Fri, 10 Jul, 2020, 5:44 PM Andreas Kretschmer, 
wrote:

> On 10 July 2020 10:26:25 CEST, Brajendra Pratap Singh <
> singh.bpratap...@gmail.com> wrote:
> >Hi,
> >
> >What will happen if the wal_keep_segments value is too high ,is this
>
> wasted disk space.
>
> What do you want to achieve?
>
>
> Regards, Andreas
>
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
>


Block Corruption Resolution

2020-08-14 Thread Brajendra Pratap Singh
Hi ,

Is it possible to recover only corrupted blocks in postgresql versions
9.6/10/11/12?

Is the pg_healer extension supported by PostgreSQL, or good for corrupted
block recovery? If yes, is it compatible with all PostgreSQL versions
9.6/10/11/12?

Thanks,
Brajendra


Schema/ROLE Rename Issue

2020-09-09 Thread Brajendra Pratap Singh
Hi,

Recently we renamed a schema and a role, but the change is not reflected in
the object definitions; please suggest.

OLD SCHEMA/ROLE  -  TEST11
NEW SCHEMA/ROLE -  TEST

We still see that TEST11 is present in the definitions of some objects like
procedures, functions, triggers, etc., so how can we change the name from
TEST11 to TEST in those definitions?

Thanks and Regards,
Brajendra


Re: Schema/ROLE Rename Issue

2020-09-09 Thread Brajendra Pratap Singh
Thank you Albe..

Could you please suggest the best way to get function definitions when
connecting through psql? While I am trying to change a definition through
\ef function_name, it doesn't reflect the changes.

Thanks
Brajendra

On Wed, 9 Sep, 2020, 7:39 PM Laurenz Albe,  wrote:

> On Wed, 2020-09-09 at 17:04 +0530, Brajendra Pratap Singh wrote:
> > Recently we had renamed schema and role but it will not get reflect in
> the object definitions ,plz suggest.
> >
> > OLD SCHEMA/ROLE  -  TEST11
> > NEW SCHEMA/ROLE -  TEST
> >
> > But still we are that TEST11 is present in the definition of sone
> objects like procudure,functions,triggers etc, so how can we modify the
> name from TEST11  to TEST in definition.
> >
>
> You cannot have that automatically.
>
> While view and index definitions, as well as column default clauses, use
> the internal object identifier (so that renaming doesn't break anything),
> functions are stored as string literals and parsed at
> execution time.
>
> You'll have to edit all your functions (or undo the renaming).
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
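To find which functions still embed the old schema name before editing them, the catalog can be searched; a sketch that should work on 9.6 and later (searches the stored function body text):

```sql
-- List functions whose source text still references the old schema name
SELECT n.nspname AS schema, p.proname AS function
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosrc LIKE '%TEST11%';
```

Each hit then needs a CREATE OR REPLACE FUNCTION with the corrected body, as Laurenz describes.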


DB upgrade version compatibility

2020-09-28 Thread Brajendra Pratap Singh
Can we upgrade a 9.6-version open-source PostgreSQL database directly to
version 12.3?

Thanks and Regards,
Singh


Profile Creation

2020-10-02 Thread Brajendra Pratap Singh
Hi All,

How can we create a user profile in an open-source PostgreSQL database?

Thanks,
Singh


Re: Profile Creation

2020-10-03 Thread Brajendra Pratap Singh
Hi Adrian,

Here, "user profile" means something with the same functionality that an
EDB profile provides, such as password_verify_function, password_life_time,
password_lock_time, etc.

Thanks,
Singh

On Sat, 3 Oct, 2020, 2:14 AM Adrian Klaver, 
wrote:

> On 10/2/20 1:40 PM, Brajendra Pratap Singh wrote:
> > Hi All,
> >
> > How can we create a user profile in open postgresql db?
>
> You are going to need to be more specific about what you consider a user
> profile to be.
>
> >
> > Thanks,
> > Singh
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Partitioned Table conversion to Non-Partition table in postgresql open source

2020-12-28 Thread Brajendra Pratap Singh
Hi,

Currently we are running open-source PostgreSQL 9.6 with some partitioned
tables, which we need to convert from inheritance partitioning to
declarative partitioning during migration to open-source version 12.3.

Is there any safe and fast way to fulfill this requirement? Please
suggest.

Thanks and regards,
Singh
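Since 9.6 predates declarative partitioning, the usual route is to create the declarative structure on the 12.3 side and load the data into it; a hedged sketch, with table and column names modeled on the transactions examples earlier in this archive (adjust to the real schema):

```sql
-- On the PostgreSQL 12 target: declarative range partitioning
CREATE TABLE transactions (
    trn_id               bigint,
    trn_store_date_id    integer NOT NULL,
    trn_transaction_date timestamp
) PARTITION BY RANGE (trn_store_date_id);

CREATE TABLE transactions_202012 PARTITION OF transactions
    FOR VALUES FROM (20201201) TO (20210101);

-- Then load, e.g. via pg_dump/pg_restore of the old inheritance parent, or:
-- INSERT INTO transactions SELECT * FROM old_schema.transactions;
```

Tuple routing places each inserted row into the matching partition automatically, so the load step does not need to target child tables individually.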


Parallelism on Partitioning .

2021-01-18 Thread Brajendra Pratap Singh
Hi ,

We are trying to assign parallel workers, i.e. to execute queries in
parallel, on partitioned tables and the partitioned master table, but are
not able to do so. Could you please suggest?

Current PostgreSQL version: 9.6

FYI, we have tried all the parameters that can allow parallel query
execution.

Thanks..


Pgbackrest version 2.28 Bug/Issue

2021-03-02 Thread Brajendra Pratap Singh
Hi ,

We are getting the below issue while executing the backup through
pgbackrest.

2021-03-02 02:10:01.620 P00   INFO: backup command begin 2.28:
--archive-check --archive-copy --compress --db-timeout=7200
--log-level-console=detail --log-level-file=detail
--log-level-stderr=detail --log-path=/apayrman/pgbackrest/backup/log
--pg1-path=/apayothr/pgsql/12.3/data --process-max=12
--protocol-timeout=9000 --repo1-path=/apayrman/pgbackrest
--repo1-retention-archive=8 --repo1-retention-full=2 --stanza=uatpgdbserver
--start-fast --stop-auto
2021-03-02 02:10:01.622 P00   WARN: unable to check pg-1: [DbConnectError] unable to connect to 'dbname='postgres' port=5432': could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
2021-03-02 02:10:01.622 P00  ERROR: [056]: unable to find primary cluster - cannot proceed
2021-03-02 02:10:01.622 P00   INFO: backup command end: aborted with exception [056]

DB version :- postgresql 12.3(open source)

OS :- Centos8

Pgbackrest version :- 2.28

Pgbackrest.conf parameters :-

[uatpgdbserver]
pg1-path=/apayothr/pgsql/12.3/data
db-timeout=7200
db1-user=postgres

[global]
repo1-path=/apayrman/pgbackrest
compress=y
retention-archive=8
retention-full=2
process-max=12
log-path=/apayrman/pgbackrest/backup/log
start-fast=y
stop-auto=y
protocol-timeout=9000
archive-check=y
archive-copy=y

Please suggest the solution for above issue.

Thanks,
Singh
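The WARN above shows pgBackRest looking for the server's Unix socket in /var/run/postgresql, while the instance may be listening on a different socket directory. One hedged fix is pointing pgBackRest at the actual socket directory with pg1-socket-path (the /tmp value below is a guess; it must match unix_socket_directories in postgresql.conf):

```
[uatpgdbserver]
pg1-path=/apayothr/pgsql/12.3/data
pg1-socket-path=/tmp
pg1-user=postgres
db-timeout=7200
```

Alternatively, adding /var/run/postgresql to unix_socket_directories on the server fixes the same mismatch from the other side.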


Parallel Index Scan Implementation

2023-11-27 Thread Brajendra Pratap Singh
Hi Postgresql Experts,

We have a basic requirement: we need the planner to use a parallel index
scan instead of a parallel seq scan at the table level. Please suggest the
best way to do this.

Postgresql DB Version : 12.6

Table size : 2-5 GB

OS : Centos-7

RAM : 96 GB
CPU : 32

shared_buffers = 20GB
effective_cache_size = 60GB

max_worker_processes = 32
max_parallel_workers = 32
max_parallel_workers_per_gather = 8

Please let me know if any further details required .

Thanks & Regards,

Singh
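Whether the planner chooses Parallel Index Scan is purely cost-driven, so for experimentation the relevant knobs can be lowered per session; a diagnostic sketch (table and column names are placeholders, and these settings are for testing, not production):

```sql
-- Make parallel index scans cheaper for the planner to consider
SET min_parallel_index_scan_size = 0;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
-- Optionally discourage sequential scans to see if a parallel
-- index plan exists at all
SET enable_seqscan = off;

EXPLAIN (ANALYZE)
SELECT * FROM your_table WHERE indexed_col = 42;
```

Note that a btree index only supports a (parallel) index scan when the WHERE clause or ORDER BY actually matches the index; otherwise lowering costs cannot produce one.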


XX001ERROR: found xmin from before relfrozenxid for pg_authid and pg_database

2021-06-21 Thread Brajendra Pratap Singh
Hi,

Good Morning...

We are getting below issue on postgresql 9.6.8 version

XX001 ERROR: found xmin 3355284520 from before relfrozenxid 1097492040
XX001 CONTEXT: automatic vacuum of table "xyz.pg_catalog.pg_authid"

XX001 ERROR: found xmin 3355284522 from before relfrozenxid 1097492055
XX001 CONTEXT: automatic vacuum of table "xyz.pg_catalog.pg_database"

# Replacing the actual DB name with xyz.

Please provide the reason for, and a solution to, this issue.

Thanks and regards,
Singh