Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi
Respected postgres community members

I have created BRIN indexes on a few columns of the table without any issues,
but I am unable to create a BRIN index on one column of the table as I got the
error listed below:


[local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
("dFetch");
ERROR:  data type boolean has no default operator class for access method
"brin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

Below is the column description:

 Column | datatype | collation | nullable | default | storage
--------+----------+-----------+----------+---------+---------
 dFetch | boolean  |           |          | false   | plain



So please help me create a BRIN index on the above column of the table.



Regards

Durgamahesh Manne
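
For reference, a quick way to see which data types have a default BRIN
operator class is to query the system catalogs. This is only a sketch added
for context, not part of the original message:

    -- List the default operator classes available to the BRIN access method:
    SELECT t.typname, o.opcname
    FROM pg_opclass o
    JOIN pg_am a ON a.oid = o.opcmethod
    JOIN pg_type t ON t.oid = o.opcintype
    WHERE a.amname = 'brin'
      AND o.opcdefault
    ORDER BY t.typname;
    -- boolean does not appear here, which is why the CREATE INDEX above fails
    -- unless an operator class is specified.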


*Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi
Respected postgres community members

I have created BRIN indexes on a few columns of the table without any issues,
but I am unable to create a BRIN index on one column of the table as I got the
error listed below:


[local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
("dFetch");
ERROR:  data type boolean has no default operator class for access method
"brin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

Below is the column description:

 Column | datatype | collation | nullable | default | storage
--------+----------+-----------+----------+---------+---------
 dFetch | boolean  |           |          | false   | plain



So please help me create a BRIN index on the above column of the table.



Regards

Durgamahesh Manne


Re: Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman  wrote:

>
>
> *From:* Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> *Sent:* Wednesday, September 19, 2018 10:04 AM
> *To:* Igor Neyman 
> *Subject:* Re: Regarding brin_index on required column of the table
>
> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
> wrote:
>
>
>
> *From:* Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> *Sent:* Wednesday, September 19, 2018 9:43 AM
> *To:* PostgreSQL mailing lists 
> *Subject:* Regarding brin_index on required column of the table
>
> Hi
>
> Respected postgres community members
>
>
>
> I have created BRIN index on few columns of the table without any issues.
> But i am unable to create BRIN index on one column of the table as i got
> error listed below
>
>
>
>
>
> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
> ("dFetch");
>
> ERROR:  data type boolean has no default operator class for access method
> "brin"
>
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>
>
>  below is the column description:
>
> Column datatype   collationnullable   defaultstorage
>
>
>
>  dFetchboolean         false
>   plain
>
>
>
>
>
>
>
> so please help in creating of the BRIN index on above column of the table .
>
>
>
>
>
>
>
> Regards
>
>
>
> Durgamahesh Manne
>
>
>
> Why would you want BRIN index on Boolean-type column?
>
> What kind of interval will you specify?
>
>
>
> Regards,
>
> Igor Neyman
>
>
>
>
>
>
>
>
>
>  Hi
>
>
>
>
>
>  I have complex query like for ex  select  distinct
> sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec join
> "table2" sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where  rec."bFetch"=false and sub_head."bFetch"=false ;
>
>
>
>
>
> Query taken around 7 minutes time to execute without indexes on required
> columns
>
>
>
>  SO  i need to execute this distinct query at less time by creating
> indexes on required columns of the tables
>
>
>
> i have created brin indexes on vchsubmitterscode of two tables
>
>
>
> i am not able to create brin indexes on bfetch tables as i got a error
> ERROR:  data type boolean has no default operator class for access method
> "brin"
>
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>
>
>
>
> so please help in creating of the BRIN index on above column of the table
> as i need to reduce the query execution time
>
>
>
>
>
> Regards
>
>
>
> Durgamahesh Manne
>
> Again, BRIN indexes are not design to work on Boolean columns. If you want
> to index Boolean column, just create regular BTREE index.
>
> Regards,
>
> Igor
>
>
>
>
>

 Hi

I have already tried BTREE and HASH indexes on the required columns, but the
DISTINCT query execution time was not reduced.


The query takes around 7 minutes to execute with BTREE and HASH indexes on
the required columns.


Regards

Durgamahesh Manne
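
A minimal sketch of what Igor suggests, using the table and column names
that appear later in this thread (the index names are made up):

    -- BRIN has no default boolean operator class, but btree does:
    CREATE INDEX idx_table1_bfetch ON "table1" ("bFetch");
    CREATE INDEX idx_table2_bfetch ON "table2" ("bFetch");

A plain btree index on a low-selectivity boolean flag is often not used by
the planner, which matches the result reported above.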


Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer 
wrote:

>
>
> Am 19.09.2018 um 15:51 schrieb Durgamahesh Manne:
> > I have created BRIN index on few columns of the table without any
> > issues. But i am unable to create BRIN index on one column of the
> > table as i got error listed below
> >
> >
> > [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
> > brin ("dFetch");
> > ERROR:  data type boolean has no default operator class for access
> > method "brin"
> > HINT:  You must specify an operator class for the index or define a
> > default operator class for the data type.
>
> honestly, a BRIN-Index on a bool-column doesn't make much sense. What do
> you want to achive? Maybe a partial index with a where-condition on that
> column makes much more sense.
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi


I want to execute the DISTINCT query in as little time as possible.

For that reason I have already tried BTREE and HASH indexes on the required
columns, but the DISTINCT query execution time was not reduced.

select distinct sub_head."vchSubmittersCode", rec."vchFileName",
rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from "table1" rec join "table2" sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;

The query takes around 7 minutes to execute with BTREE and HASH indexes on
the required columns.



Regards

Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi

I have a complex query, for example:

select distinct sub_head."vchSubmittersCode", rec."vchFileName",
rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from "table1" rec join "table2" sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;


I have already tried BTREE and HASH indexes on the required columns, but the
DISTINCT query execution time was not reduced.


The query takes around 7 minutes to execute with BTREE and HASH indexes on
the required columns.


So please help me reduce the DISTINCT query execution time.

Regards

Durgamahesh Manne


On Wed, Sep 19, 2018 at 7:21 PM Durgamahesh Manne 
wrote:

> Hi
> Respected postgres community members
>
> I have created BRIN index on few columns of the table without any issues.
> But i am unable to create BRIN index on one column of the table as i got
> error listed below
>
>
> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
> ("dFetch");
> ERROR:  data type boolean has no default operator class for access method
> "brin"
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>  below is the column description:
> Column datatype   collationnullable   defaultstorage
>
>  dFetchboolean false
>   plain
>
>
>
> so please help in creating of the BRIN index on above column of the table .
>
>
>
> Regards
>
> Durgamahesh Manne
>


Re: Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi

As per your suggestion, I ran EXPLAIN ANALYZE for the DISTINCT query.

The size of table1 is 30 MB.
The size of table2 is 368 kB.

EXPLAIN ANALYZE select distinct sub_head."vchSubmittersCode",
rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from table1 rec join table2 sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;



 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms


So I am unable to reduce the query execution time; it takes around 7 minutes
to execute both with and without indexes.

Please help me reduce the query execution time.


Regards
Durgamahesh Manne


On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:

> The results of explain analyze would shed light on the core problem.
>
> My guess is that your conditions are not very selective - ie. most
> records in both tables have bFetch = false - and therefore you are
> retrieving most of your data and that is what's taking 7 minutes. No
> index is going to fix that.
>
> If those boolean values are distributed very unevenly (say 99.9% has
> false and 0.1% has true), you may get better results by excluding the
> records with 'true' values (instead of including those that are
> 'false'), for example by using a where not exists(...) subquery.
>
> Obviously, that still won't help if you're just fetching a lot of data.
> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>  wrote:
> >
> >
> >
> >
> >
> >
> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 10:04 AM
> >> To: Igor Neyman 
> >> Subject: Re: Regarding brin_index on required column of the table
> >>
> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 9:43 AM
> >> To: PostgreSQL mailing lists 
> >> Subject: Regarding brin_index on required column of the table
> >>
> >> Hi
> >>
> >> Respected postgres community members
> >>
> >>
> >>
> >> I have created BRIN index on few columns of the table without any
> issues. But i am unable to create BRIN index on one column of the table as
> i got error listed below
> >>
> >>
> >>
> >>
> >>
> >> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
> brin ("dFetch");
> >>
> >> ERROR:  data type boolean has no default operator class for access
> method "brin"
> >>

Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 8:27 PM Andreas Kretschmer 
wrote:

>
>
> Am 19.09.2018 um 16:43 schrieb Durgamahesh Manne:
> >
> >
> > On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer
> > mailto:andr...@a-kretschmer.de>> wrote:
> >
> >
> >
> > Am 19.09.2018 um 15:51 schrieb Durgamahesh Manne:
> > > I have created BRIN index on few columns of the table without any
> > > issues. But i am unable to create BRIN index on one column of the
> > > table as i got error listed below
> > >
> > >
> > > [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei
> > using
> > > brin ("dFetch");
> > > ERROR:  data type boolean has no default operator class for access
> > > method "brin"
> > > HINT:  You must specify an operator class for the index or define a
> > > default operator class for the data type.
> >
> > honestly, a BRIN-Index on a bool-column doesn't make much sense.
> > What do
> > you want to achive? Maybe a partial index with a where-condition
> > on that
> > column makes much more sense.
> >
> >
> > Regards, Andreas
> >
> > --
> > 2ndQuadrant - The PostgreSQL Support Company.
> > www.2ndQuadrant.com <http://www.2ndQuadrant.com>
> >
> >
> >
> > Hi
> >
> >
> > I want to execute distinct query at less possible time
> >
> > for that reason ,Even i have already tried with BTREE indexes & HASH
> > indexes on required columns .distinct query execution time was not
> reduced
> >
> > select  distinct sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec
> > join  "table2" sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > Query taken around 7 minutes time to execute with BTREE indexes & HASH
> > indexes on required columns
> >
>
> try an index like
>
> create index idx_t1 on table1 ("vchSubmittersCode") where "bFetch"=false
> and
> create index idx_t2 on table2 ("vchSubmittersCode") where "bFetch"=false
>
> and check if the plan changed and the indexes are in use. You can use
> create index concurrently to prevent lockings.
>
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

As per your suggestion, I created partial indexes with a WHERE condition on
the required columns.

The DISTINCT query execution time was not reduced; the query takes around 7
minutes to execute both with and without the indexes.

So I ran EXPLAIN ANALYZE for the DISTINCT query:

EXPLAIN ANALYZE select distinct sub_head."vchSubmittersCode",
rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from table1 rec join table2 sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;


 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms



So please help me reduce the DISTINCT query execution time.

Regards

Durgamahesh Manne
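
For reference, Andreas's suggestion written out as complete statements;
CREATE INDEX CONCURRENTLY builds the index without blocking writes (the
index names are his examples):

    CREATE INDEX CONCURRENTLY idx_t1 ON table1 ("vchSubmittersCode") WHERE "bFetch" = false;
    CREATE INDEX CONCURRENTLY idx_t2 ON table2 ("vchSubmittersCode") WHERE "bFetch" = false;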


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer 
wrote:

> Hi,
>
>
> the problem is there:
>
>
> Am 20.09.2018 um 11:48 schrieb Durgamahesh Manne:
> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
> > time=326397.551..389515.863 rows=370 loops=1)
> > |
> > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
> > (actual time=326397.550..372470.846 rows=4050 loops=1)
> >   |
> > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
> > rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> > sub_head."vchValuationDate", rec."vchAdvisorLabel" |
> > | Sort Method: external merge  Disk: 3923224kB
> >  |
>
> Please check the execution time without DISTINCT.
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
Hi

As per your request, I ran EXPLAIN ANALYZE on the query without DISTINCT:


                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
   Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
         Filter: (NOT "bFetch")
         Rows Removed by Filter: 4706
   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 26kB
         ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
               Filter: (NOT "bFetch")
               Rows Removed by Filter: 375
 Planning time: 0.351 ms
 Execution time: 8371.819 ms
(12 rows)


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi

As per your request, I ran the query below without DISTINCT:

select sub_head."vchSubmittersCode", rec."vchFileName",
rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from table1 rec join table2 sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;

The above query took around 47 seconds to execute.
With DISTINCT it took around 7 minutes.



On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>> Hi,
>>
>>
>> the problem is there:
>>
>>
>> Am 20.09.2018 um 11:48 schrieb Durgamahesh Manne:
>> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
>> > time=326397.551..389515.863 rows=370 loops=1)
>> > |
>> > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
>> > (actual time=326397.550..372470.846 rows=4050 loops=1)
>> >   |
>> > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
>> > rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
>> > sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>> > | Sort Method: external merge  Disk: 3923224kB
>> >  |
>>
>> Please check the execution time without DISTINCT.
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
> hi
>
> as per your request
>
> i ran explain analyze query without distinct
>
>
>
> +--+
> |
> QUERY PLAN
> |
>
> +--+
> | Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual
> time=0.429..6763.942 rows=4050 loops=1)
> |
> |   Hash Cond: ((rec."vchSubmittersCode")::text =
> (sub_head."vchSubmittersCode")::text)
>   |
> |   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80)
> (actual time=0.006..48.610 rows=10 loops=1)|
> | Filter: (NOT "bFetch")
>
>  |
> | Rows Removed by Filter: 4706
>
>  |
> |   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual
> time=0.404..0.404 rows=405 loops=1)
> |
> | Buckets: 1024  Batches: 1  Memory Usage: 26kB
>
> |
> | ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405
> width=11) (actual time=0.004..0.326 rows=405 loops=1) |
> |   Filter: (NOT "bFetch")
>
>  |
> |   Rows Removed by Filter: 375
>
> |
> | Planning time: 0.351 ms
>
> |
> | Execution time: 8371.819 ms
>
> |
>
> +--+
> (12 rows)
>
>


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:41 PM Durgamahesh Manne 
wrote:

> hi
> as per your request
> i ran below query without distinct
>
> select  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
> table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where  rec."bFetch"=false and sub_head."bFetch"=false ;
>
> the above query took around 47 sec to execute
> the above query took around 7 minutes to execute with distinct
>
>
>
> On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
>> andr...@a-kretschmer.de> wrote:
>>
>>> Hi,
>>>
>>>
>>> the problem is there:
>>>
>>>
>>> Am 20.09.2018 um 11:48 schrieb Durgamahesh Manne:
>>> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
>>> > time=326397.551..389515.863 rows=370 loops=1)
>>> > |
>>> > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
>>> > (actual time=326397.550..372470.846 rows=4050 loops=1)
>>> >   |
>>> > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
>>> > rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
>>> > sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>>> > | Sort Method: external merge  Disk: 3923224kB
>>> >  |
>>>
>>> Please check the execution time without DISTINCT.
>>>
>>> Regards, Andreas
>>> --
>>>
>>> 2ndQuadrant - The PostgreSQL Support Company.
>>> www.2ndQuadrant.com
>>>
>>>
>> hi
>>
>> as per your request
>>
>> i ran explain analyze query without distinct
>>
>>
>>
>> +--+
>> |
>> QUERY PLAN
>> |
>>
>> +--+
>> | Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual
>> time=0.429..6763.942 rows=4050 loops=1)
>> |
>> |   Hash Cond: ((rec."vchSubmittersCode")::text =
>> (sub_head."vchSubmittersCode")::text)
>>   |
>> |   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80)
>> (actual time=0.006..48.610 rows=10 loops=1)|
>> | Filter: (NOT "bFetch")
>>
>>  |
>> | Rows Removed by Filter: 4706
>>
>>  |
>> |   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual
>> time=0.404..0.404 rows=405 loops=1)
>> |
>> | Buckets: 1024  Batches: 1  Memory Usage: 26kB
>>
>>   |
>> | ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405
>> width=11) (actual time=0.004..0.326 rows=405 loops=1) |
>> |   Filter: (NOT "bFetch")
>>
>>  |
>> |   Rows Removed by Filter: 375
>>
>>   |
>> | Planning time: 0.351 ms
>>
>>   |
>> | Execution time: 8371.819 ms
>>
>>   |
>>
>> +--+
>> (12 rows)
>>
>>

Hi


The query executed in much less time without DISTINCT,
whereas with DISTINCT it takes around 7 minutes to complete:

 select distinct sub_head."vchSubmittersCode", rec."vchFileName",
rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel"
from table1 rec join table2 sub_head
on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;

I need to execute the above DISTINCT query in less time, as it takes too
long even though I have created indexes on the required columns of the
tables.


Regards
Durgamahesh Manne


Re: Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:12 PM Durgamahesh Manne 
wrote:

>
>
> Hi
>
> As per your suggestion
>
>
> i ran explain analyse for distinct query
>
> the size of the table1 is 30mb
> the size of the table2 is 368kb
>
>  EXPLAIN ANALYZE select  distinct sub_head."vchSubmittersCode"
> ,rec."vchFileName" , rec."vchCusipFundIDSubFundID" ,
> rec."vchFundUnitPrice"
> ,  sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
> join  table2 sub_head on rec."vchS
> ubmittersCode"=sub_head."vchSubmittersCode" where  rec."bFetch"=false and
> sub_head."bFetch"=false ;
>
>
>
>   Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
> time=326397.551..389515.863 rows=370 loops=1)
>   |
> |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual
> time=326397.550..372470.846 rows=4050 loops=1)
> |
> | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> sub_head."vchValuationDate", rec."vchAdvisorLabel" |
> | Sort Method: external merge  Disk: 3923224kB
>
>  |
> | ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89)
> (actual time=0.339..6939.296 rows=4050 loops=1)
> |
> |   Hash Cond: ((rec."vchSubmittersCode")::text =
> (sub_head."vchSubmittersCode")::text)
>   |
> |   ->  Seq Scan on table1 rec  (cost=0.00..1822.66
> rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
>   |
> | Filter: (NOT "bFetch")
>
>  |
> | Rows Removed by Filter: 4706
>
>  |
> |   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual
> time=0.319..0.319 rows=405 loops=1)
> |
> | Buckets: 1024  Batches: 1  Memory Usage: 26kB
>
> |
> | ->  Seq Scan on table2 sub_head  (cost=0.00..16.00
> rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1) |
> |   Filter: (NOT "bFetch")
>
>      |
> |   Rows Removed by Filter: 375
>
> |
> | Planning time: 0.237 ms
>
> |
> | Execution time: 390252.089 ms
>
>
> so i am unable to reduce the query execution time as it is taken around 7
> minutes to execute with indexes & without indexes
>
>  please help in reducing the query execution time
>
>
> Regards
> Durgamahesh Manne
>
>
> On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:
>
>> The results of explain analyze would shed light on the core problem.
>>
>> My guess is that your conditions are not very selective - ie. most
>> records in both tables have bFetch = false - and therefore you are
>> retrieving most of your data and that is what's taking 7 minutes. No
>> index is going to fix that.
>>
>> If those boolean values are distributed very unevenly (say 99.9% has
>> false and 0.1% has true), you may get better results by excluding the
>> records with 'true' values (instead of including those that are
>> 'false'), for example by using a where not exists(...) subquery.
>>
>> Obviously, that still won't help if you're just fetching a lot of data.
>> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>>  wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 10:04 AM
>> >> To: Igor Neyman 
>> >> Subject: Re: Regarding brin_index on required column of the table
>> >>
>> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 9:43 AM
>> >> To: PostgreSQL mailing lists 
>> >> Subject: Regarding brin_index on required column of the tabl

Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer 
wrote:

>
>
> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
> > Query was executed at less time without distinct
> >
> > As well as query was taking around 7 minutes to complete execution
> > with distinct
> >
> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
> > join table2 sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > I need to execute above distinct query at less time as distinct query
> > was taking more time to execute  even i have created indexes on
> > required columns of the tables
> >
>
>  >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
> time=326397.551..389515.863 rows=370
> loops=1) |
>  > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
> (actual time=326397.550..372470.846 rows=4050
> loops=1)  |
>  > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>  > | Sort Method: external merge  Disk:
> 3923224kB  |
>
>
> as you can see: there are 40.500.000 rows to sort to filter out
> duplicate rows, the result contains 'only' 3.700.000 rows. But for this
> step the database needs nearly 4TB on-disk. This will, of course, need
> some time.
>
> If you have enough ram you can try to set work_mem to 5 or 6 GB to
> change the plan to a in-memory - sort. But keep in mind, this is
> dangerous! If the machine don't have enough free ram the kernal can
> decide to Out-Of-Memory - killing processes.
>
> What kind of disks do you have? Maybe you can use a separate fast SSD as
> temp_tablespaces?
>
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

I checked in Linux; the disks are HDDs:

sda [HDD]
sdb [HDD]
sdc [HDD]

Regards
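
A minimal sketch of Andreas's temp_tablespaces idea, assuming a hypothetical
SSD mount point; with only HDDs available it mainly documents the option:

    -- Put temporary sort files on a faster disk (the path is made up):
    CREATE TABLESPACE fast_temp LOCATION '/mnt/ssd/pgtemp';
    ALTER SYSTEM SET temp_tablespaces = 'fast_temp';
    SELECT pg_reload_conf();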


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 7:25 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>>
>>
>> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
>> > Query was executed at less time without distinct
>> >
>> > As well as query was taking around 7 minutes to complete execution
>> > with distinct
>> >
>> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
>> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
>> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
>> > join table2 sub_head on
>> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
>> > rec."bFetch"=false and sub_head."bFetch"=false ;
>> >
>> > I need to execute above distinct query at less time as distinct query
>> > was taking more time to execute  even i have created indexes on
>> > required columns of the tables
>> >
>>
>>  >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
>> time=326397.551..389515.863 rows=370
>> loops=1) |
>>  > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
>> (actual time=326397.550..372470.846 rows=4050
>> loops=1)  |
>>  > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
>> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
>> sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>>  > | Sort Method: external merge  Disk:
>> 3923224kB  |
>>
>>
>> as you can see: there are 40.500.000 rows to sort to filter out
>> duplicate rows, the result contains 'only' 3.700.000 rows. But for this
>> step the database needs nearly 4TB on-disk. This will, of course, need
>> some time.
>>
>> If you have enough ram you can try to set work_mem to 5 or 6 GB to
>> change the plan to a in-memory - sort. But keep in mind, this is
>> dangerous! If the machine don't have enough free ram the kernal can
>> decide to Out-Of-Memory - killing processes.
>>
>> What kind of disks do you have? Maybe you can use a separate fast SSD as
>> temp_tablespaces?
>>
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
>>
> Hi
>
> sdb[HDD]
> sdc[HDD]
> sda[HDD]
>
> i checked that there are hdd's in linux
>
> Regards
>
>
>
Hi

The DISTINCT query executed very fast after I temporarily increased the
work_mem value to 3 GB.

Thank you very much for this valuable information.

Now I would like to ask one question related to built-in BDR replication:

When will built-in BDR replication be available for use in production?

Can I use v3 built-in replication in production?

Please let me know about the configuration of v3 BDR built-in replication.



Regards

Durgamahesh Manne
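
A minimal sketch of the work_mem approach described above, applied per
session so the higher limit does not affect the whole server (3 GB is the
value mentioned in this message):

    -- Raise work_mem only for the session running the big DISTINCT,
    -- so the sort can stay in memory instead of spilling to disk:
    SET work_mem = '3GB';
    SELECT DISTINCT ... ;   -- the query from this thread
    RESET work_mem;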


Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
 Sort Method: quicksort  Memory: 23482kB
 ->  GroupAggregate  (cost=10588651.59..15985055.52 rows=2513 width=1180) (actual time=847482.207..1802617.045 rows=22461 loops=1)
       Group Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
       ->  Sort  (cost=10588651.59..10738549.61 rows=59959206 width=677) (actual time=847464.789..1486679.680 rows=61595746 loops=1)
             Sort Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
             Sort Method: external merge  Disk: 42758304kB
             ->  Nested Loop  (cost=0.42..266305.78 rows=59959206 width=677) (actual time=0.122..73786.837 rows=61595746 loops=1)
                   ->  Seq Scan on "table3" j  (cost=0.00..669.12 rows=25132 width=591) (actual time=0.021..28.338 rows=25132 loops=1)
                         Filter: (NOT "bFetch")
                   ->  Index Scan using cpr_idx4 on table2 k  (cost=0.42..6.92 rows=365 width=107) (actual time=0.838..2.244 rows=2451 loops=25132)
                         Index Cond: (("vchAgentTaxID")::text = (j.vchagenttaxid)::text)
 Planning time: 2.369 ms
 Execution time: 1807771.091 ms

So I need to execute this query in less time. Please help me optimise the
execution time of this complex query.


Regards

Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
On Fri, Sep 21, 2018 at 7:38 PM Durgamahesh Manne 
wrote:

> Hi
>
> Complex query taken around 30 minutes to execute even i have
> increased work_mem value to 4GB  temporarily as total ram is 16gb
>
> Explain analyze query taken around 30 minutes to execute even i have
> created partial indexes with where condition on required columns
>
>
>
> Below is the query plan for explain analyze query :
>
> | HashAggregate  (cost=16034967.98..16035010.75 rows=4277 width=1710)
> (actual time=1806653.536..1806680.802 rows=26098 loops=1)
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>   |
> |   Group Key: (max((v."vchSubmittersCode")::text)),
> (max((v."vchRecordType")::text)), (max((v."vchSequenceNumber")::text)),
> v."vchContractNumber", v."vchContractPartyRoleQualifier", v."vchPartyRole",
> (max((v."vchPartyNatural_Non_NaturalEntity")::text)),
> (max((v."vchPartyLastName")::text)), (max((v."vchPartyFirstName")::text)),
> (max((v."vchPartyMiddleName")::text)), (max((v."vchPartyPrefix")::text)),
> (max((v."vchPartySuffix")::text)), (NULL::text), v."vchPartyID",
> (max((v."vchPartyIDQualifier")::text)),
> (max((v."vchTrustRevocabilityIndicator")::text)), (NULL::text),
> (NULL::text), (max((v."vchFiller1")::text)),
> (max((v."vchRejectCode")::text)),
> (max((v."vchContractEntityAddressLine1")::text)),
> (max((v."vchContractEntityAddressLine2")::text)),
> (max((v."vchContractEntityCity")::text)),
> (max((v."vchContractEntityState")::text)),
> (max((v."vchContractEntityZip")::text)),
> (max((v."vchContractEntityAddressLine3")::text)),
> (max((v."vchContractEntityAddressLine4")::text)),
> (max((v."vchContractEntityAddressLine5")::text)),
> (max((v."vchPartyDateofBirth")::text)),
> (max((v."vchPartyAddressLine1")::text)),
> (max((v."vchContractStatus")::text)), (string_agg(DISTINCT
> btrim((s."vchAgentTaxID")::text), ','::text)), v."vchPartyRole",
> (max((v."vchAdvisorLabel")::text)), v."vchFileName",
> (max((v.vchpartycity)::text)), (max((v.vchpartystate)::text)),
> (max((v.vchpartypostalcode)::text)), (string_agg(DISTINCT
> btrim((s."vchAgentFirstName")::text) || ' '::text) ||
> btrim((s."vchAgentMiddleName")::text)) || ' '::text) ||
> btrim((s."vchAgentLastName")::text)), ','::text)) |
> |   ->  Append  (cost=48944.67..16034550.97 rows=4277 width=1710) (actual
> time=3324.233..1806605.691 rows=26098 loops=1)
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>  |
> | ->  Unique  (cost=48944.67..49103.43 rows=1764 width=1145)
> (actual time=3324.233..3327.824 rows=3637 loops=1)
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>   |
> |   ->  Sort  (cost=48944.67..48949.08 rows=1764 width=1145)
> (actual time=3324.232..3324.447 rows=3637 loops=1)
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>   |
> | Sort Key: (max((v."vchSubmittersCode")::text)),
> (max((v."vchRecordType")::text)), (max((v."vchSequenceNumber")::text)),
> v."vchContractNumber", v."vchContractPartyRoleQualifier", v."vchPartyRole",
> (max((v."vchPartyNatural_Non_NaturalEntity")::text)),
> (max((v."vchPartyLastName")::text)), (max((v."vchPartyFirstName")::text)),
> (max((v."vchPartyMiddleName")::text)), (max((v."vchPartyPrefix")::text)),
> (max((v."vchPartySuffix")::text)), v."vchPartyID",
> (max((v."vchPartyIDQualifier")::text)),
> (max((v."vchTrustRevocabilityIndicator")::text)),
> (max((v."vchFiller1")::text)), (max((v."vchRejectCode")::text)),
> (max((v."vchContractEntityAddressLine1")::text)),
> (max((v."vchContractEntityAddressLine2")::text)),
> (max((v."vchContractEntityCity")::text)),
> (max((v."vchContractEntityState")::text)),
> (max((v."vchContractEntityZip")::text)),
> (max((v."vchContractEntityAddressLine3")::text)),
> (max((v."vchContractEntityAddressLine4")::text)),
> (max((v.&qu

Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
On Fri, Sep 21, 2018 at 9:12 PM Andreas Kretschmer 
wrote:

>
>
> Am 21.09.2018 um 17:13 schrieb Durgamahesh Manne:
> > query is below
>
> query and plan still not readable. Store it into a textfile and attach
> it here.
>
>
> Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

Please find the query plan file attached.


Regards

Durgamahesh Manne


Query
Description: Binary data


Re: *Regarding brin_index on required column of the table

2018-09-24 Thread Durgamahesh Manne
Thank you all very much for this information

On Sat, Sep 22, 2018 at 12:38 AM Alban Hertroys  wrote:

>
>
> > On 21 Sep 2018, at 17:49, Durgamahesh Manne 
> wrote:
> >
> > 
>
> Considering how hard you try to get rid of duplicates, I'm quite convinced
> that you're at least short a few join conditions. Getting rid of duplicates
> early has the added benefit of having to aggregate fewer rows, which should
> drastically improve the performance of this query.
>
> In addition, your approach of using both distinct and group by raises a
> red flag that you're fighting symptoms (most likely: duplicate results)
> without understanding their causes. I can't think of a single situation
> where both are justified in the same (sub-)query.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>
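
A small sketch of Alban's point about mixing DISTINCT and GROUP BY, using
made-up columns: once the GROUP BY key makes each output row unique, an
additional DISTINCT only adds another sort/unique pass and can be dropped.

    -- The GROUP BY already returns one row per contract number,
    -- so adding DISTINCT on top of it is redundant:
    SELECT "vchContractNumber", max("vchAdvisorLabel") AS advisor_label
    FROM "table1"
    GROUP BY "vchContractNumber";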


regarding bdr extension

2018-09-28 Thread Durgamahesh Manne
Hi

This is regarding a bdr extension issue. I got the error below when I tried
to create the bdr extension:


ERROR: could not open extension control file
"opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file or
directory



Regards
Durgamahesh Manne


Re: regarding bdr extension

2018-10-01 Thread Durgamahesh Manne
On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver 
wrote:

> On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > This is regarding bdr extension issue. I got below error at the time i
> > have tried to create the bdr extention
> >
> >
> > ERROR: could not open extension control file
> > "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> > or directory
>
> OS and version?
>
> How are you installing BDR, from package or source?
>
> What was the exact command that led to the error?
>
> Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
>
> >
> >
> >
> > Regards
> > Durgamahesh Manne
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



 Hi

OS and version?

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/";
SUPPORT_URL="http://help.ubuntu.com/";
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/";
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

The database version is 10.

2) How are you installing BDR, from package or source?

I am trying to install BDR by using the PostgreSQL 10 one-click installer.

3) What was the exact command that led to the error?

After I installed PostgreSQL 10, I ran CREATE EXTENSION bdr, which led to
the error.

4) Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?

I ran CREATE EXTENSION bdr, which led to the error below:

  > ERROR: could not open extension control file
> "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> or directory



   Regards

Durgamahesh Manne


Re: regarding bdr extension

2018-10-01 Thread Durgamahesh Manne
On Mon, Oct 1, 2018 at 7:34 PM Adrian Klaver 
wrote:

> On 10/1/18 1:08 AM, Durgamahesh Manne wrote:
> >
> >
> > On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> >  > Hi
> >  >
> >  > This is regarding bdr extension issue. I got below error at the
> > time i
> >  > have tried to create the bdr extention
> >  >
> >  >
> >  > ERROR: could not open extension control file
> >  > "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No
> > such file
> >  > or directory
> >
> > OS and version?
> >
> > How are you installing BDR, from package or source?
> >
> > What was the exact command that led to the error?
> >
> > Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
> >
> >  >
> >  >
> >  >
> >  > Regards
> >  > Durgamahesh Manne
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
> >
> >
> >   Hi
> >
> >  OS and version?
> >
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
> > NAME="Ubuntu"
> > VERSION="16.04.3 LTS (Xenial Xerus)"
> > ID=ubuntu
> > ID_LIKE=debian
> > PRETTY_NAME="Ubuntu 16.04.3 LTS"
> > VERSION_ID="16.04"
> > HOME_URL="http://www.ubuntu.com/";
> > SUPPORT_URL="http://help.ubuntu.com/";
> > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/";
> > VERSION_CODENAME=xenial
> > UBUNTU_CODENAME=xenial
> >
> > database version is 10
> >
> > 2) How are you installing BDR, from package or source?
> >
> > i am trying to install bdr by using postgres 10 one click installer
>
> This one?:
>
> https://www.postgresql.org/download/linux/
>
> Since you are on Ubuntu why not use the PGDG repo?:
>
> https://www.postgresql.org/download/linux/ubuntu/
>
> >
> > 3) What was the exact command that led to the error?
> >
> >   after  i have installed postgres 10  i ran CREATE EXTENSION bdr led to
> > error
> >
> > 4) Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
>
> Understood, but the question remains does that file exist in the
> location specified above?
>
> If it does then there is possibly a permissions problem. If it does not
> then either the extension was installed somewhere else or it was not
> installed at all.
>
> >
> >i ran CREATE EXTENSION bdr which led to below error
> >
> >> ERROR: could not open extension control file
> >> "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> >> or directory
> >
> >
> >
> > Regards
> >
> > Durgamahesh Manne
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



 Hi sir

* Since you are on Ubuntu why not use the PGDG repo?

In my company environment I am using the PG installer to manage the
Postgres servers.

Is this PGDG repository useful for configuring BDR v3?

If yes, then I can use it for BDR.

Please let me know some information about the PGDG repository in detail.

* Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?

The bdr.control file is not available.


Regards

Durgamahesh Manne
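
A small sketch for checking whether an extension is actually installed on
the server before running CREATE EXTENSION; pg_available_extensions only
lists extensions whose control file is present under the server's share
directory (the directory that pg_config --sharedir prints):

    -- If this returns no row, the BDR packages are not installed on this server:
    SELECT name, default_version, installed_version
    FROM pg_available_extensions
    WHERE name = 'bdr';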


Re: regarding bdr extension

2018-10-03 Thread Durgamahesh Manne
On Tue, Oct 2, 2018 at 7:22 PM Andreas Kretschmer 
wrote:

>
>
> Am 02.10.2018 um 15:29 schrieb Adrian Klaver:
> >
> >>
> >> does this pgdg repository useful for configuration of bdr v3 ?
> >
> > BDR is a third party extension from 2nd Quadrant, they would be the
> > ones to ask about configuring.
>
> i said it already, BDR3 is not for public, only for our customers. You
> will need a own support contract.
>
>
> Durgamahesh Manne, please contact us, if you are interesst in BDR version
> 3.
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
OK, thank you. I will contact you about a BDR support contract if my company
needs to use BDR v3.


Regarding varchar max length in postgres

2018-10-05 Thread Durgamahesh Manne
Hi

Please let me know the maximum length of varchar and text in Postgres.



Regards

Durgamahesh Manne


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
wrote:

> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar  & text in postgres
>
> https://www.postgresql.org/docs/10/static/datatype-character.html
> >
> >
> >
> > Regards
> >
> > Durgamahesh Manne
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


 Hi

Thank you for this information.

As per the PostgreSQL documentation:

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne 
wrote:

>
>
> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
> wrote:
>
>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>> > Hi
>> >
>> > please let me know the max length of varchar  & text in postgres
>>
>> https://www.postgresql.org/docs/10/static/datatype-character.html
>> >
>> >
>> >
>> > Regards
>> >
>> > Durgamahesh Manne
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>
>
>  Hi
>
> Thank you for this information
>
> as per the postgresql documentation   please ignore incomplete message i
> mailed to respected community members
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 10485760 value
>

   create table test(id serial primary key, str varchar(10485761));

 ERROR: length for type varchar cannot exceed 10485760


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>> wrote:
>>
>>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>>> > Hi
>>> >
>>> > please let me know the max length of varchar  & text in postgres
>>>
>>> https://www.postgresql.org/docs/10/static/datatype-character.html
>>> >
>>> >
>>> >
>>> > Regards
>>> >
>>> > Durgamahesh Manne
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>
>>
>>  Hi
>>
>> Thank you for this information
>>
>
  as per the postgresql documentation

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this 

>
>> as per the postgresql documentation   please ignore incomplete above
>> message i mailed to respected community members
>>
>>  If character varying is used without length specifier, the type accepts
>> strings of any size
>>
>> but varchar does not accept more than this 10485760 value
>>
>
>create table test(id serial primary key, str varchar(10485761));
>
>  ERROR: length for type varchar cannot exceed 10485760
>


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:42 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>>> wrote:
>>>
>>>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>>>> > Hi
>>>> >
>>>> > please let me know the max length of varchar  & text in postgres
>>>>
>>>> https://www.postgresql.org/docs/10/static/datatype-character.html
>>>> >
>>>> >
>>>> >
>>>> > Regards
>>>> >
>>>> > Durgamahesh Manne
>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>
>>>
>>>  Hi
>>>
>>> Thank you for this information
>>>
>>
>   as per the postgresql documentation
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 
>
>>
>>> as per the postgresql documentation   please ignore incomplete above
>>> message i mailed to respected community members
>>>
>>>  If character varying is used without length specifier, the type
>>> accepts strings of any size
>>>
>>> but varchar does not accept more than this 10485760 value
>>>
>>
>>create table test(id serial primary key, str varchar(10485761));
>>
>>  ERROR: length for type varchar cannot exceed 10485760
>>
>

As Thomas said:

  Further down on that page the overall limit is documented:

  "In any case, the longest possible character string that can be stored
  is about 1 GB"

  So the part that you quoted implicitly means "accepts strings of any size
  - up to the maximum of 1GB".
  Maybe it makes sense to make that clearer at that point.


Was there any specific reason that the maximum length allowed for varchar is
limited to the value 10485760?

Why is the maximum length for varchar not unlimited, like the text datatype?

  character varying(n), varchar(n)   variable-length with limit
  character(n), char(n)              fixed-length, blank padded
  text                               variable unlimited length



Regards


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 3:11 PM Thomas Kellerer  wrote:

> Durgamahesh Manne schrieb am 15.10.2018 um 11:18:
> > was there any specific reason that you have given max length for varchar
> is limited to 10485760 value?
> >
> > why you have not given max length for varchar is unlimited like text
> datatype ?
> >
> > |character varying(/|n|/)|, |varchar(/|n|/)|variable-length with limit
> > |character(/|n|/)|, |char(/|n|/)|fixed-length, blank padded
> > |text|variable unlimited length
>
> It "text" type is a "character string" just like all the other character
> types and thus is also limited to 1GB
>
> "text", "varchar", "character varying" and "character" are all identical
> in how they are stored and processed.
>
> Thomas
>
>
> Thank you for this information
>
>
>


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 7:54 PM Tom Lane  wrote:

> Durgamahesh Manne  writes:
> >>> If character varying is used without length specifier, the type
> >>> accepts strings of any size
> >>> but varchar does not accept more than this 10485760 value
>
> You're confusing the size of string that can be stored with the
> largest value accepted for "n" in "varchar(n)".  This is documented,
> in the same place that people have been pointing you to:
>
> In any case, the longest possible character string that can be stored
> is about 1 GB. (The maximum value that will be allowed for n in the
> ---
> data type declaration is less than that. It wouldn't be useful to
> 
> change this because with multibyte character encodings the number of
> characters and bytes can be quite different. If you desire to store
> long strings with no specific upper limit, use text or character
> varying without a length specifier, rather than making up an arbitrary
> length limit.)
>
> As you found out, the limit for "n" is ~ 10 million.
>
> In principle, we could have allowed it to be as much as 1Gb divided by
> the maximum character length of the database's encoding, but it did
> not seem like a great idea for the limit to be encoding-dependent.
>
> As the last sentence in the doc paragraph points out, the preferred
> thing to do if you just want to allow very long strings is to leave
> off "(n)" altogether.
>
> The subtext here, which maybe we ought to state in a more in-your-face
> way, is that if you use char(N) or varchar(N) without a concrete
> application-driven reason why N has to be that particular value,
> no more or less, then You're Doing It Wrong.  Artificially-chosen
> column width limits are a bad idea left over from the days of
> punched cards.  The reason the limit on N is much smaller than it
> could theoretically be is that column declarations with very large
> N are, without exception, violations of this principle.
>
> regards, tom lane
>



Hi sir

>>> If character varying is used without a length specifier, the datatype
>>> accepts strings of any size, up to a maximum of about 1 GB, as I found
in the PGDG docs.

I have not used that maximum length value 10485760 for varchar in any table
of the db, and I am not confused about the maximum string length for varchar
being about 1 GB.

I have used the varchar column datatype without any length limit.

I checked with a value above that limit by creating a test table, for
example:

create table test(id serial primary key, str varchar(10485761));

ERROR: length for type varchar cannot exceed 10485760


  text                               variable unlimited length
  character varying(n), varchar(n)   variable-length with limit

As documented, text has unlimited length, while varchar's variable length is
limited to 1 GB.

So I need an unlimited-length data type for the required column of the
table, for storing large values.

Is there any issue with using the unlimited-length datatype text for the
required column of the table instead of varchar?



Regards

Durgamahesh Manne


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> So i need unlimited length data type for required column of the table for
>> storing the large values
>> is there any issue to use unlimited length datatype  text  for the
>> required column of the table instead of using varchar ?
>>
>
> Between the two you should use the "text" data type for those columns.
> You will need to describe your use case in more detail if you want input as
> to whether you need to use the large object API instead.
>
> The database cannot store an unlimited amount of data in a single
> row+column (cell) - you will still encounter a physical limit to the number
> of bytes able to be stored in a single cell when using text.
>
> David J.
>


Hi sir

I need to store larger values in the required column of the table, so I will
use text rather than varchar.

You said that there is still a physical limit to the number of bytes that
can be stored in a single column value.

  text   variable unlimited length


Thank you for this valuable information.

PostgreSQL is always the world's no. 1 leading open source RDBMS.



I request the community members to provide built-in BDR v3 replication for
the public, as multimaster replication is a high priority compared with
other DBMSs.
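
A minimal sketch of the advice above: declare the column as text (or varchar
with no length specifier), keeping in mind that a single value is still
capped at roughly 1 GB (the table name is made up):

    CREATE TABLE example_docs (
        id   serial PRIMARY KEY,
        body text   -- no declared limit; each value is still bounded by the ~1 GB field limit
    );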


Re: Regarding varchar max length in postgres

2018-10-16 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 9:52 PM Adrian Klaver 
wrote:

> On 10/15/18 8:56 AM, Durgamahesh Manne wrote:
>
>
> >   I request you all community members to provide built in bdr v3 version
> > replication for public as multimaster replication is on high priority
> > against other dbms
>
> BDR v3 is third party extension from 2ndQuadrant, it is not community
> code. Elements of it have found there into the community code as logical
> replication, but that is as far as it has gotten. You will need to take
> this up with 2ndQuadrant.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Thank you for this information sir


Regarding db dump with Fc taking very long time to completion

2019-08-30 Thread Durgamahesh Manne
Hi
To respected international postgresql team

I am using postgresql 11.4 version
I have scheduled logical dump job which runs daily one time at db level
There was one table that has write intensive activity for every 40 seconds
in db
The size of the table is about 88GB
 Logical dump of that table is taking more than 7 hours to be completed

 I need to reduce to dump time of that table that has 88GB in size


Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari  wrote:

> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
>  wrote:
> >  Logical dump of that table is taking more than 7 hours to be completed
> >
> >  I need to reduce to dump time of that table that has 88GB in size
>
> Good luck!
> I would see two possible solutions to the problem:
> 1) use physical backup and switch to incremental (e.g., pgbackrest)
> 2) partition the table and backup single pieces, if possible
> (constraints?) and be assured it will become hard to maintain (added
> partitions, and so on).
>
> Are all of the 88 GB be written during a bulk process? I guess no, so
> maybe partitioning you can avoid locking the whole dataset and reduce
> contention (and thus time).
>
> Luca
>


Hi respected postgres team

  Are all of the 88 GB written during a bulk process?
   No.
 Earlier the table size was 88GB; now it is about 148GB.
 Is there any way to reduce the dump time when I dump this 148GB table,
without creating partitions on it?


Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:

> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>  wrote:
> >  Is there any way to reduce dump time when i take dump of the table
> which has 148gb in size without creating partition* on that table has 148gb
> in size ?
>
> I would at least try a parallel dump: should it be -j option to pg_dump.
> I'm not sure it is an effective parallelism against a single table.
>
> Luca
>

Hi

Parallel jobs with pg_dump can be effective for a database that contains a
lot of tables and their dependent objects.

Parallel jobs with pg_dump cannot be effective against a single table.
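
For reference, a hedged sketch of a parallel dump invocation (the database
name and output path are placeholders; parallel mode requires the directory
format):

pg_dump -Fd -j 4 -f /backup/mydb.dir mydb

Each worker dumps different tables, so a single 148GB table still ends up in
one worker and gains little from -j.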


Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:22 PM Durgamahesh Manne 
wrote:

>
>
> On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:
>
>> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>>  wrote:
>> >  Is there any way to reduce dump time when i take dump of the table
>> which has 148gb in size without creating partition* on that table has 148gb
>> in size ?
>>
>> I would at least try a parallel dump: should it be -j option to pg_dump.
>> I'm not sure it is an effective parallelism against a single table.
>>
>> Luca
>>
>
> Hi
>
> parallel jobs with pg_dump can* be effective for the database which
> contains lot of tables&its dependent objects
>
> parallel jobs with pg_dump can not be effective against a single table
>
>
> Regards
> Durgamahesh Manne
>
>
>


Regarding automatic table partitioning in PostgreSQL 12 version is possible or not

2020-01-17 Thread Durgamahesh Manne
Hi
 To the respected PostgreSQL international team

Please let me know whether automatic table partitioning is possible in pgsql
12 without using a trigger function.



Regards
Durgamahesh Manne


Regarding automatic table partitioning without using trigger function in pgsql 12 is possible or not

2020-01-17 Thread Durgamahesh Manne
Hi
 To the respected PostgreSQL international team

Please let me know whether automatic table partitioning without using a
trigger function is possible in pgsql 12.



Regards

Durgamahesh Manne


Re: Regarding automatic table partitioning in PostgreSQL 12 version is possible or not

2020-01-17 Thread Durgamahesh Manne
On Fri, Jan 17, 2020 at 7:43 PM Stephen Frost  wrote:

> Greetings,
>
> * Durgamahesh Manne (maheshpostgr...@gmail.com) wrote:
> > Please let me know that automatic table partitioning is possible in pgsql
> > 12 or not without using trigger function
>
> The approach I'd recommend for automatically getting new table partitions
> created, at least today, is to use pg_partman, which is a very handy
> extension for working with partitions in PG.
>
> Thanks,
>
> Stephen
>

Hi sir


Thank you for this valuable information
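
For reference, a rough sketch of what that looks like (schema, table and
column names are placeholders, and the create_parent call assumes the
pg_partman 4.x signature):

CREATE TABLE public.events (
    id        bigint      NOT NULL,
    logged_at timestamptz NOT NULL
) PARTITION BY RANGE (logged_at);

-- pg_partman pre-creates child partitions, and run_maintenance() /
-- run_maintenance_proc() keeps adding new ones, so no trigger function is needed
SELECT partman.create_parent(
    p_parent_table => 'public.events',
    p_control      => 'logged_at',
    p_type         => 'native',
    p_interval     => 'daily',
    p_premake      => 4
);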


Regarding db performance improvement

2020-04-29 Thread Durgamahesh Manne
Hi to PGDG TEAM


kernel.shmmax=18446744073692774399

kernel.shmall=18446744073692774399

 I am using an Ubuntu Linux server (16.04) to manage pgsql, so the kernel
default value for kernel.shmmax was 18446744073692774399 and the kernel
default value for kernel.shmall was 18446744073692774399.

I am using PostgreSQL version 12.1, and the server has 32GB of RAM and 8 CPUs.

1) Do I need to change the default values of the kernel.shmmax and
kernel.shmall parameters to the values computed by the script below?


Does pgsql get a noticeable performance improvement from changing the
kernel.shmmax and kernel.shmall default values?
 #!/bin/bash
# simple shmsetup script
page_size=`getconf PAGE_SIZE`
phys_pages=`getconf _PHYS_PAGES`
shmall=`expr $phys_pages / 2`
shmmax=`expr $shmall \* $page_size`
echo kernel.shmmax = $shmmax
echo kernel.shmall = $shmall
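
If the computed values are ever applied, a hedged sketch of doing so on
Ubuntu is to append the two echoed lines to /etc/sysctl.conf (or a file under
/etc/sysctl.d/) and reload:

sudo sysctl -p

That said, PostgreSQL 9.3 and later allocate most shared memory via mmap, so
with the huge stock defaults shown above these two settings rarely need
changing at all.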



Regards
Durgamahesh Manne


Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Durgamahesh Manne
Hi

Respected to PGDG GLOBAL TEAM

I am getting this error( ERROR:  data type character varying has no default
operator class for access method "gin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type ) while i try to create gin index on
vch_message column of slp01 table (CREATE INDEX ON slp01 using
gin(vch_message);)

vch_message column has lot of this info like
{"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"
remuda...@hotmail.com","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
text/json, text/x-json, text/javascript, application/xml, text/xml"

NOTE: I previously created a pg_trgm-based GIN index on vch_message of the
slp01 table, but it occupied too much disk space, so I dropped that index.

please help in creating gin index on vch_message column  of slp01 table

Regards
Durgamahesh Manne


Re: Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Durgamahesh Manne
On Sat, May 23, 2020 at 6:50 PM Andreas Kretschmer 
wrote:

>
>
> Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:
> > Hi
> >
> > Respected to PGDG GLOBAL TEAM
> >
> > I am getting this error( ERROR:  data type character varying has no
> > default operator class for access method "gin"
> > HINT:  You must specify an operator class for the index or define a
> > default operator class for the data type ) while i try to create gin
> > index on vch_message column of slp01 table (CREATE INDEX ON slp01
> > using gin(vch_message);)
> >
> > vch_message column has lot of this info like
> > {"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"
> remuda...@hotmail.com
> > <mailto:remuda...@hotmail.com>","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic
>
> >
> Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
>
> > text/json, text/x-json, text/javascript, application/xml, text/xml"
> >
> >
>
>
> the string looks like JSON (but is missing the correct end of the string).
> Please check. If you are storing JSON or JSONB values, you should use
> the proper datatype - JSON/JSONB. In this case you can create a
> GIN index on it.
>
>
> Regards, Andreas
> (and please don't email me private the same question)
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
Hi Andreas

Thank you for this information
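
For reference, a minimal sketch of the two options this suggests (it assumes
every vch_message value is in fact valid JSON, which the truncated sample
above does not prove):

-- option 1: convert the column and index it directly
ALTER TABLE slp01 ALTER COLUMN vch_message TYPE jsonb USING vch_message::jsonb;
CREATE INDEX ON slp01 USING gin (vch_message);

-- option 2: keep varchar and index a jsonb expression instead
CREATE INDEX ON slp01 USING gin ((vch_message::jsonb));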


Regards

Durgamahesh Manne


Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
Hi

Respected community members

I have configured tds_fdw on the postgres server and created multiple
foreign tables pointing at SQL Server. So far I can run select queries
without any issues.

I got the error ERROR: cannot insert into foreign table "pgsql" when I
tried to insert data into a foreign table.

1) Is there any way to run insert, delete, and update queries on these
foreign tables?



Regards

durgamahesh manne


Re: Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule 
wrote:

> Hi
>
> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
> maheshpostgr...@gmail.com> napsal:
>
>> Hi
>>
>> Respected community members
>>
>> I have configured tds_fdw on postgres server.. I have created multiple
>> foreign tables related to sql server as of now i could run select queries
>> with out any issues
>>
>>   i got  this error ERROR: cannot insert into foreign table "pgsql"  when
>> i tried to insert the data into foerign table
>>
>> 1) Is there any way to run insert delete update  queries on foreign
>> tables ?
>>
>
> I can read on page
> https://support.google.com/hangouts/answer/3110347?hl=en&ref_topic=2944918&vid=0-737329123936-1543400907610
>
> "The current version does not yet support JOIN push-down, or write
> operations."
>
> Regards
>
> Pavel
>
>
>>
>>
>>
>> Regards
>>
>> durgamahesh manne
>>
>


 Hi

Are there any FDWs that support DML operations on the SQL Server foreign
tables I have created in the PG server?



Regards


Re: Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
On Wed, Nov 28, 2018 at 6:31 PM Durgamahesh Manne 
wrote:

>
>
> On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule 
> wrote:
>
>> Hi
>>
>> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
>> maheshpostgr...@gmail.com> napsal:
>>
>>> Hi
>>>
>>> Respected community members
>>>
>>> I have configured tds_fdw on postgres server.. I have created multiple
>>> foreign tables related to sql server as of now i could run select queries
>>> with out any issues
>>>
>>>   i got  this error ERROR: cannot insert into foreign table "pgsql"
>>> when i tried to insert the data into foerign table
>>>
>>> 1) Is there any way to run insert delete update  queries on foreign
>>> tables ?
>>>
>>
>> I can read on page
>> https://support.google.com/hangouts/answer/3110347?hl=en&ref_topic=2944918&vid=0-737329123936-1543400907610
>>
>> "The current version does not yet support JOIN push-down, or write
>> operations."
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>>
>>>
>>> Regards
>>>
>>> durgamahesh manne
>>>
>>
>
>
>  Hi
>
> are there any fdws which supports dml operation on  sql server foreign
> tables i have created in pg server ?
>
>
>
> Regards
>
>
>


Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
Hi

The query below has been running for a long time:

Select
distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
ltrim(rtrim(ssnumber))) as vchAccountKey,
null as vchFeedsAccountType
from accounts as TFA
join client3 as CL on
ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
left join account3 as AC on
ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
where AC."iInsightAccountID" is null;

The query still runs for a long time even after I created the required
indexes on the columns of the tables.
Please help me make this query execute faster.

Regards
durgamahesh manne


Re: Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
On Mon, Jan 28, 2019 at 6:34 PM Ron  wrote:

> On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > below query is being executed for long time
> >
> > Select
> > distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> > ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
> > concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
> > concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
> > null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
> > TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
> > TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
> > TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
> > CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
> > ltrim(rtrim(ssnumber))) as vchAccountKey,
> > null as vchFeedsAccountType
> > from accounts as TFA
> > join client3 as CL on
> ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
> > left join account3 as AC on
> >
> ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
> > where AC."iInsightAccountID" is null;
> >
> >  query is being executed for long time even after i have created
> required
> > indexes on columns of the tables
> > please help for fast query execution
>
> I bet this is what's causing your troubles:
> on ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
>
> The query optimizer probably can't use indexes on those tables.
>
>
> --
> Angular momentum makes the world go 'round.
>
Hi

Why can't the query optimizer use the indexes on those columns of the tables?


Re: Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
On Mon, Jan 28, 2019 at 8:41 PM Adrian Klaver 
wrote:

> On 1/28/19 5:04 AM, Ron wrote:
> > On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> >> Hi
> >>
> >> below query is being executed for long time
> >>
> >> Select
> >> distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> >> ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
> >> concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
> >> concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
> >> null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
> >> TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
> >> TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
> >> TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
> >> CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
> >> ltrim(rtrim(ssnumber))) as vchAccountKey,
> >> null as vchFeedsAccountType
> >> from accounts as TFA
> >> join client3 as CL on
> >> ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
> >> left join account3 as AC on
> >>
> ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
>
> >>
> >> where AC."iInsightAccountID" is null;
> >>
> >>  query is being executed for long time even after i have created
> >> required indexes on columns of the tables
> >> please help for fast query execution
> >
> > I bet this is what's causing your troubles:
> > on ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
>
> There is also the question of why not do?:
>
> btrim(Cl.vchssnumber) = btrim(TFA.ssnumber)
>
> >
> > The query optimizer probably can't use indexes on those tables.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
hi

CREATE INDEX idx10 on account3 USING btree
(ltrim(rtrim("vchCustodianAccountNumber")));
CREATE INDEX idx11 on accounts USING btree
(ltrim(rtrim(client_account_key)));
CREATE INDEX idx12 on accounts USING btree (ltrim(rtrim(ssnumber)));
 CREATE INDEX idx13 on client3 USING btree (ltrim(rtrim(vchssnumber)));

After I created the indexes on the ltrim(rtrim(...)) expressions of those
columns, the query took much less time to execute.
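
One detail worth noting: the planner only considers an expression index when
the query uses exactly the same expression, so a quick sketch of the check is:

EXPLAIN ANALYZE
SELECT 1
FROM accounts TFA
JOIN client3 CL ON ltrim(rtrim(CL.vchssnumber)) = ltrim(rtrim(TFA.ssnumber));
-- with idx12/idx13 in place the planner can now use those expression indexes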


Regards

durgamahesh manne


Regarding pgaudit log_directory

2019-03-29 Thread Durgamahesh Manne
Hi
Respected pgsql team

Please let me know the pgaudit parameter for writing pgaudit log files to
their own location.

I don't want pgaudit log entries stored in the regular pgsql log_directory
location.



Regards

durgamahesh manne


Re: Regarding pgaudit log_directory

2019-03-29 Thread Durgamahesh Manne
On Fri, Mar 29, 2019 at 8:58 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 29/3/19 5:15 μ.μ., Durgamahesh Manne wrote:
>
> Hi
> Respected pgsql team
>
> please let me know the pgaudit parameter to store pgaudit log files only
>
> i don't want to store pgaudit log files at pgsql log_directory file
> location
>
> let me guess, there is none?
>
>
>
>
> Regards
>
> durgamahesh manne
>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
   hi

  I could not find parameter related to pgaudit log_directory  .



Regards

Durgamahesh Manne


Re: Regarding pgaudit log_directory

2019-03-31 Thread Durgamahesh Manne
On Saturday, March 30, 2019, David Steele  wrote:

> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory  .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility and there are no plans for such a feature.
>
> The general solution is to use Splunk, ELK, etc. to do manipulation of the
> PostgreSQL logs.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>

Hi

Please let me know an open source application or interface to monitor only
the pgaudit log entries, as I have installed pgaudit successfully.

Regards

Durgamahesh Manne


Regarding pgaudit log_directory

2019-03-31 Thread Durgamahesh Manne
On Saturday, March 30, 2019, David Steele  wrote:

> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory  .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility and there are no plans for such a feature.
>
> The general solution is to use Splunk, ELK, etc. to do manipulation of the
> PostgreSQL logs.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>

Hi

Please let me know an open source application or interface to do
manipulation of only the pgaudit logs, as I have installed pgaudit
successfully.

Regards

Durgamahesh Manne


Regarding pgaudit

2019-03-31 Thread Durgamahesh Manne
Hi

Respected postgres team

Please let me know open source application interface to monitor the pgaudit
log files only as I have installed pgaudit tool

Regards

Durgamahesh Manne


dbuser acess privileges

2019-04-04 Thread Durgamahesh Manne
hi

Respected international pgsql team

pershing=# grant INSERT on public.hyd to ravi;
GRANT
i have granted insert command access to non superuser(ravi)
pershing=> insert into hyd (id,name) values('2','delhi');
INSERT 0 1
here data inserted


pershing=# grant UPDATE on public.hyd to ravi;
GRANT
i have granted update command access to non superuser(ravi)

pershing=> update public.hyd set id = 3 where name = 'hyderabad';
ERROR:  permission denied for relation hyd
please let me know what is the issue with update command


Re: dbuser acess privileges

2019-04-04 Thread Durgamahesh Manne
On Thu, Apr 4, 2019 at 3:55 PM Ron  wrote:

> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
> > hi
> > Respected international pgsql team
> >
> > pershing=# grant INSERT on public.hyd to ravi;
> > GRANT
> > i have granted insert command access to non superuser(ravi)
> > pershing=> insert into hyd (id,name) values('2','delhi');
> > INSERT 0 1
> > here data inserted
> >
> >
> > pershing=# grant UPDATE on public.hyd to ravi;
> > GRANT
> > i have granted update command access to non superuser(ravi)
> >
> > pershing=> update public.hyd set id = 3 where name = 'hyderabad';
> > ERROR:  permission denied for relation hyd
> > please let me know what is the issue with update command
>
> Are there any triggers on public.hyd which modify other tables?
>
> --
> Angular momentum makes the world go 'round.
>
>
> Hi
there are no triggers on public.hyd table

Regards
durgamahesh manne


Re: dbuser acess privileges

2019-04-04 Thread Durgamahesh Manne
On Thu, Apr 4, 2019 at 4:14 PM Durgamahesh Manne 
wrote:

>
>
>
> On Thu, Apr 4, 2019 at 3:55 PM Ron  wrote:
>
>> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
>> > hi
>> > Respected international pgsql team
>> >
>> > pershing=# grant INSERT on public.hyd to ravi;
>> > GRANT
>> > i have granted insert command access to non superuser(ravi)
>> > pershing=> insert into hyd (id,name) values('2','delhi');
>> > INSERT 0 1
>> > here data inserted
>> >
>> >
>> > pershing=# grant UPDATE on public.hyd to ravi;
>> > GRANT
>> > i have granted update command access to non superuser(ravi)
>> >
>> > pershing=> update public.hyd set id = 3 where name = 'hyderabad';
>> > ERROR:  permission denied for relation hyd
>> > please let me know what is the issue with update command
>>
>> Are there any triggers on public.hyd which modify other tables?
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>>
>> Hi
> there are no triggers on public.hyd table
>
> Regards
> durgamahesh manne
>
>
>

 Hi

I found what looks like a bug in GRANT UPDATE for non-superusers.

GRANT UPDATE worked fine for them on version 9.3.


I request you to fix this GRANT UPDATE issue for non-superusers as soon as
possible in the next PG version, 10.8.



Regards

durgamahesh manne


Re: dbuser acess privileges

2019-04-04 Thread Durgamahesh Manne
*From:* Durgamahesh Manne 
*Sent:* Thursday, April 4, 2019 12:07 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* dbuser acess privileges



hi



Respected international pgsql team



pershing=# grant INSERT on public.hyd to ravi;

GRANT

i have granted insert command access to non superuser(ravi)

pershing=> insert into hyd (id,name) values('2','delhi');

INSERT 0 1

here data inserted





pershing=# grant UPDATE on public.hyd to ravi;

GRANT

i have granted update command access to non superuser(ravi)



pershing=> update public.hyd set id = 3 where name = 'hyderabad';

ERROR:  permission denied for relation hyd

please let me know what is the issue with update command



On Thu, Apr 4, 2019 at 5:55 PM Patrick FICHE 
wrote:

> Hi,
>
>
>
> If I’m not wrong, UPDATE requires SELECT permission as the UPDATE
> statement needs to read the data to be updated.
>
> So, you should probably add GRANT SELECT and you get it work.
>
>
>
> Regards,
>
>
>
> *Patrick Fiche*
>
> Database Engineer, Aqsacom Sas.
>
> *c.* 33 6 82 80 69 96
>
>
>
> <http://www.aqsacom.com/>
>
>
>
>
>
>
>

 Hi Patrick Fiche, as per your info:

The granted UPDATE worked AFTER I executed GRANT SELECT on the table (hyd)
for the non-superuser in 10.6.


But

in version 9.3, the granted UPDATE worked EVEN though I did NOT execute
GRANT SELECT on the table (hyd) for the non-superuser.
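
A minimal sketch of the grants that cover this case (an UPDATE with a WHERE
clause also has to read the rows, so SELECT is required as well):

GRANT SELECT, UPDATE ON public.hyd TO ravi;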


Regards

Durgamahesh Manne


Regarding TailNMail

2019-04-05 Thread Durgamahesh Manne
Hi Sir

 EMAIL: maheshpostgr...@gmail.com 
MAILSUBJECT: HOST Postgres errors UNIQUE : NUMBER

INCLUDE: ERROR:
INCLUDE: FATAL:
INCLUDE: PANIC:

FILE1: /mnt2/staging/postgres/data/log/LATEST
LASTFILE1: /mnt2/staging/postgres/data/log/postgresql-2019-04-05_00.log
OFFSET1: 10630272


I have configured TailNMail on the server.

How do I get database object errors for a specific user (ravi) from the
postgres log?

Suppose the username is ravi:

INCLUDE:  ravi ERROR:
Is this the correct approach to get the errors related to a specific db user
(ravi) from the pg log?


Regards

Durgamahesh Manne


Regarding use case of epoch to generate nanoseconds precision

2024-05-22 Thread Durgamahesh Manne
Hi

Postgres supports timestamps only up to microseconds (6 decimal digits of
precision).
How do we generate a timestamp with nanoseconds, given that RDS Postgres does
not support the timestamp9 extension?
Is there a way to get nanosecond precision with pg_partman using an epoch
control column, with or without typecasting?

p_epoch =>   (to_timestamp(control column))
Here, what is the control column?
How do I use it with the create_parent function of partman?

Here as per the pg_partman doc
p_epoch - tells pg_partman that the control column is an integer type, but
actually represents an epoch time value. Valid values for this option are:
'seconds', 'milliseconds', 'nanoseconds', and 'none'. The default is
'none'. All table names will be time-based. In addition to a normal index
on the control column, be sure you create a functional, time-based index on
the control column (to_timestamp(control column)) as well so this works
efficiently.
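
A hedged sketch of that functional index (mytable and control_col are
placeholders; for an epoch stored in nanoseconds the value has to be scaled
down to seconds first, since to_timestamp() takes seconds):

-- control column stores epoch seconds
CREATE INDEX ON mytable ((to_timestamp(control_col)));

-- control column stores epoch nanoseconds
CREATE INDEX ON mytable ((to_timestamp(control_col / 1000000000.0)));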


Regards,
Durga Mahesh Manne


Regarding tables detach concurrently with run_maintenance_proc()

2024-07-11 Thread Durgamahesh Manne
Hi Respected Team

By default run_maintenance_proc() does not detach tables concurrently. How do
we detach partitions concurrently without blocking running sessions in prod?
This is very important for our use of pg_partman.
If this is not available yet in 5.1.0, then when can I expect to get it?
If it is already there, please let me know how to configure concurrent
detaching of partitions.
Also, what is the best way to implement the same? Without CONCURRENTLY, how
do we ensure data integrity and consistency?

Regards
Durga Mahesh


Regarding vacuum freeze locking mechanism

2024-07-17 Thread Durgamahesh Manne
Hi

Do new inserts block while performing vacuum freeze operations ?

My understanding is that when autovacuum runs, it freezes the transaction IDs
(XIDs) of the table it is working on, that any transactions which started
before autovacuum began are allowed to complete, but that new transactions
are blocked until the autovacuum finishes. Is that accurate?

Could you please provide more clarity on this? Which lock triggers on the
tables are being used by freeze?

Your response is highly appreciated

Regards,
Durga Mahesh


Re: Regarding vacuum freeze locking mechanism

2024-07-18 Thread Durgamahesh Manne
Hi David.

Excellent response from you .Great

Regards,
Durga Mahesh

On Thu, Jul 18, 2024 at 11:28 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, July 17, 2024, Durgamahesh Manne 
> wrote:
>
>>
>> Could you please provide more clarity on this? Which lock triggers on the
>> tables are being used by freeze?
>>
>
>  https://www.postgresql.org/docs/current/explicit-locking.html
>
> Share update exclusive
>
> David J.
>
>


Fwd: Regarding tables detach concurrently with run_maintenance_proc()

2024-07-19 Thread Durgamahesh Manne
Hi Respected Team

With pg_partman, run_maintenance_proc() does not detach tables concurrently
by default. How do we detach partitions concurrently without blocking other
sessions?
Here, queries do not use the date column, and run_maintenance_proc() detaches
tables based on the retention policy without CONCURRENTLY, which leads to
scanning all available child tables. Hence I need to run this proc with a
concurrent option, to avoid blocking child tables beyond the retention policy
while statements are running on them.
Do we have any alternative other than using pg_partman?

your response is valuable

Regards,
Durga Mahesh


Re: Fwd: Regarding tables detach concurrently with run_maintenance_proc()

2024-07-19 Thread Durgamahesh Manne
On Fri, Jul 19, 2024 at 7:55 PM Christoph Berg  wrote:

> Re: Durgamahesh Manne
> > with pg_partman By default proc() does not detach tables concurrently.
> How
> > do we implement tables detach concurrently without blocking other
> sessions
> > Here queries not using date column to detach tables with
> > run_maintenance_proc() which is not using concurrently  based on the
> > retention policy which leads to scan all available child tables hence
> need
> > to trigger this proc with concurrently option to avoid blocking other
> child
> > tables beyond rentention policy while running statements on them
>
> You might have more success by filing pg_partman issues at
> https://github.com/pgpartman/pg_partman/issues
>
> > Do we have any other alternative rather than using pg_partman()?
>
> Well you can just run the same commands manually that pg_partman would
> run.
>
> Christoph
>

Hi
You might have more success by filing pg_partman issues at
https://github.com/pgpartman/pg_partman/issues >>> Okay.
My intention is to find another extension besides pg_partman, or else to
manage the table partitions manually.
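
If the retention step is handled manually, the core DDL involved is the
statement below (a sketch; the partition name is a placeholder, and DETACH
... CONCURRENTLY needs PostgreSQL 14 or later and cannot run inside a
transaction block):

ALTER TABLE parent_table DETACH PARTITION parent_table_p2024_01 CONCURRENTLY;
DROP TABLE parent_table_p2024_01;   -- once the detach has completed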

Regards
Durga Mahesh


Regarding publish_via_partiton_root with pglogical

2024-07-20 Thread Durgamahesh Manne
Hi

Respected Team

I know how to implement partition replication with the publication and
subscription of built-in logical replication:
CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
(publish_via_partition_root = true); this uses the parent table to replicate
data changes from source to target.
But could you please describe how to implement the same with pglogical?
Do I need to add something like publish_via_partition_root to the
pglogical.replication_set_add_table call?
Your response is valuable in this aspect.

Regards,
Durga Mahesh


pg_repack job scheduling with pg_cron

2024-07-24 Thread Durgamahesh Manne
Hi
Respected Team

I have tried to set up a pg_repack job with pg_cron but I could not implement
it properly.

Is there any way to schedule a pg_repack job with pg_cron?

If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not on a bastion host).

your response is highly valuable

Regards.
Durga Mahesh


Scheduling pg_repack job with pg_cron

2024-07-30 Thread Durgamahesh Manne
Hi
Respected Team

Is there any way to schedule a pg_repack job with pg_cron within the
instance?

If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not on a bastion host).

your response is highly valuable

Regards.
Durga Mahesh


Soluton on Lock:extend issue

2024-08-10 Thread Durgamahesh Manne
Hi

Lock:extend  (version 14.11)
How do we resolve a Lock:extend wait issue when there is a surge in
concurrent sessions (insert and update) on the same table?

Reducing the number of concurrent sessions would avoid it, but that is not a
real solution, and there is no network bandwidth issue.

Is there any parameter to tune to minimize the impact of Lock:extend waits?

Regards,
Durga Mahesh


autovacuum freeze recommendations at table level

2024-08-10 Thread Durgamahesh Manne
Hi  Respected Team,

Could you please let me know how these freeze parameters work?
An update query runs on a table whose data is modified daily in this case.
The total number of records in the table is about 20 lakhs (2,000,000).
The current settings for this table are:
Access method: heap
Autovacuum triggers when dead tuples exceed 0.1 * 2,000,000 + 1,000 = 201,000
as per the formula.
Options: fillfactor=85, autovacuum_vacuum_cost_delay=0,
autovacuum_vacuum_cost_limit=3000, parallel_workers=6,
autovacuum_vacuum_scale_factor=0.1, autovacuum_vacuum_threshold=1000,
autovacuum_freeze_max_age=20000000 (2 crores),
autovacuum_multixact_freeze_max_age=20000000 (2 crores),
autovacuum_freeze_min_age=0

How do the autovacuum freeze parameters work? Please give me some
recommendations to improve performance beyond the current settings, e.g.:
ALTER TABLE table SET (
   autovacuum_freeze_max_age = 20000000,            -- 2 crores
   autovacuum_multixact_freeze_max_age = 20000000,  -- 2 crores
   autovacuum_freeze_min_age = 0
);
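
To see how close the table actually is to those limits, a small check (the
table name is a placeholder):

SELECT relname,
       age(relfrozenxid)    AS xid_age,
       mxid_age(relminmxid) AS multixact_age
FROM pg_class
WHERE relname = 'table';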
Regards,
Durga Mahesh


Re: autovacuum freeze recommendations at table level

2024-08-13 Thread Durgamahesh Manne
Hi Semab

ALTER TABLE table SET (
   autovacuum_freeze_max_age = 60000000,            -- 6 crores
   autovacuum_multixact_freeze_max_age = 60000000,  -- 6 crores
   autovacuum_freeze_min_age = 0
);
I set this, but autovacuum (to prevent wraparound) still runs every 10
minutes on the table being modified in this case.
How do I minimize the impact of this?
There is no option to set autovacuum_naptime at table level.
I can see around 40 lakhs (4,000,000) records being modified.
TOAST covers large values, but I set these parameters without the toast.*
variants.

Regards,
Durga Mahesh Manne


On Mon, Aug 12, 2024 at 10:07 PM semab tariq  wrote:

> Hi Durga
>
> *autovacuum_freeze_max_age* specifies the maximum age (in transactions)
> that a table's tuples can reach before a vacuum is forced to prevent
> transaction ID wraparound. when the age of the oldest tuple in the table
> exceeds this value, an autovacuum is triggered to freeze the tuples.
> *Recommendation = 2000 -> 15000 *
>
> *autovacuum_multixact_freeze_max_age *It is similar to above, but applies
> to multi-transaction IDs (used for shared row locks). when the age of the
> oldest multi-transaction ID exceeds this value, an autovacuum is triggered
> to freeze the multi-transaction IDs.
> *Recommendation = 2000 -> 15000*
>
> *autovacuum_freeze_min_age* specifies the minimum age (in transactions)
> that a tuple must reach before it is considered for freezing. Lowering this
> value can cause more frequent freezing, which can increase the overhead of
> autovacuum.
> *Recommendation = 0 -> 5000*
>
> Thanks, Semab
>
> On Sun, Aug 11, 2024 at 11:12 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi  Respected Team,
>>
>> Could you please let me know that how this freeze parameters work
>> Update query runs on table  through which data being modified daily in
>> this case
>> Total records in table is about 20lakhs
>> current setting for this table is
>> Access method: heap
>> if it reaches > 0.1*200+1000 = 2,10,000 as per the formula autovacuum
>> triggers
>> Options: fillfactor=85, autovacuum_vacuum_cost_delay=0,
>> autovacuum_vacuum_cost_limit=3000, parallel_workers=6,
>> autovacuum_vacuum_scale_factor=0.1, autovacuum_vacuum_threshold=1000,
>> autovacuum_freeze_max_age=2000,
>> autovacuum_multixact_freeze_max_age=2000, autovacuum_freeze_min_age=0
>>
>> How autovacuum freeze parameters work.Give me some recommendations to
>> improve the performance better than now
>> Ex :ALTER TABLE table SET (
>>autovacuum_freeze_max_age = 2000,(2 crores)
>>autovacuum_multixact_freeze_max_age = 2000,(2 crores)
>>autovacuum_freeze_min_age = 0
>> );
>> Regards,
>> Durga Mahesh
>>
>


Performance degrade on insert on conflict do nothing

2024-09-10 Thread Durgamahesh Manne
Hi
insert into
dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
 Environment: 8 vCPUs and 32GB RAM
 Calls per sec: 1600, at which point 42% of CPU is utilized
 Max per call: 33.62 ms
 Avg per call: 0.17 ms

                                         Table "dictionary.dictionary"
     Column     |           Type           | Collation | Nullable | Default  | Storage  | Compression | Stats target | Description
----------------+--------------------------+-----------+----------+----------+----------+-------------+--------------+-------------
 lang           | text                     |           | not null |          | extended |             |              |
 tid            | text                     |           | not null |          | extended |             |              |
 basetid        | text                     |           | not null |          | extended |             |              |
 sportid        | text                     |           |          |          | extended |             |              |
 brandid        | text                     |           | not null |          | extended |             |              |
 translatedtext | text                     |           |          |          | extended |             |              |
 objecttype     | text                     |           |          |          | extended |             |              |
 createdat      | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modified       | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modifiedby     | text                     |           | not null | ''::text | extended |             |              |
 version        | integer                  |           | not null | 0        | plain    |             |              |
Indexes:
    "pk_dictionary" PRIMARY KEY, btree (lang, tid)
    "idx_dictionary_basetid" btree (basetid)
    "idx_dictionary_brandid" btree (brandid)
    "idx_dictionary_objecttype" btree (objecttype)
    "idx_dictionary_sportid" btree (sportid)
Triggers:
    i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW EXECUTE FUNCTION update_createdat_col()
    i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH ROW EXECUTE FUNCTION update_modified_col()
Access method: heap
How do we improve this query performance without taking more cpu?

Regards,
Durga Mahesh


Recommendations on improving the insert on conflict do nothing performance

2024-09-11 Thread Durgamahesh Manne
Hi
insert into
dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
 Environment: 8 vCPUs and 32GB RAM
 Calls per sec: 1600, at which point 42% of CPU is utilized
 Max per call: 33.62 ms
 Avg per call: 0.17 ms

                                         Table "dictionary.dictionary"
     Column     |           Type           | Collation | Nullable | Default  | Storage  | Compression | Stats target | Description
----------------+--------------------------+-----------+----------+----------+----------+-------------+--------------+-------------
 lang           | text                     |           | not null |          | extended |             |              |
 tid            | text                     |           | not null |          | extended |             |              |
 basetid        | text                     |           | not null |          | extended |             |              |
 sportid        | text                     |           |          |          | extended |             |              |
 brandid        | text                     |           | not null |          | extended |             |              |
 translatedtext | text                     |           |          |          | extended |             |              |
 objecttype     | text                     |           |          |          | extended |             |              |
 createdat      | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modified       | timestamp with time zone |           | not null | now()    | plain    |             |              |
 modifiedby     | text                     |           | not null | ''::text | extended |             |              |
 version        | integer                  |           | not null | 0        | plain    |             |              |
Indexes:
    "pk_dictionary" PRIMARY KEY, btree (lang, tid)
    "idx_dictionary_basetid" btree (basetid)
    "idx_dictionary_brandid" btree (brandid)
    "idx_dictionary_objecttype" btree (objecttype)
    "idx_dictionary_sportid" btree (sportid)
Triggers:
    i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW EXECUTE FUNCTION update_createdat_col()
    i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH ROW EXECUTE FUNCTION update_modified_col()
Access method: heap
How do we improve this query performance without taking more cpu?

Regards,
Durga Mahesh


Re: Performance degrade on insert on conflict do nothing

2024-09-12 Thread Durgamahesh Manne
Hi Greg

Great response from you this worked

Regards
Durga Mahesh

On Wed, Sep 11, 2024 at 7:12 PM Greg Sabino Mullane 
wrote:

> On Wed, Sep 11, 2024 at 1:02 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>>  createdat  | timestamp with time zone |   | not null | now()
>>| plain| |  |
>>  modified   | timestamp with time zone |   | not null | now()
>>| plain| |  |
>> Triggers:
>> i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW
>> EXECUTE FUNCTION update_createdat_col()
>> i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH
>> ROW EXECUTE FUNCTION update_modified_col()
>>
>
>
>> How do we improve this query performance without taking more cpu?
>>
>
> 1. Lose the first trigger. I don't know exactly what those functions do,
> but if they are only for updating those columns, just remove the first one
> and let postgres handle it via NOT NULL DEFAULT.
>
> 2. Change the second trigger to just ON UPDATE
>
> 3. Remove that second trigger as well, and have the app populate that
> column (assuming that is all it does), e.g. UPDATE dictionary SET lang =
> 'foo', modified = DEFAULT, modified_by = 'foo' where tid = 12345;
>
> 4. Remove any indexes you do not absolutely need
>
> Cheers,
> Greg
>
>
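
A sketch of the DDL behind points 1 and 2 above, assuming those trigger
functions only set the two timestamp columns (trigger timing cannot be
altered in place, so the second trigger is dropped and recreated):

DROP TRIGGER i_dictionary_createdat ON dictionary;  -- NOT NULL DEFAULT now() already covers inserts

DROP TRIGGER i_dictionary_modified ON dictionary;
CREATE TRIGGER i_dictionary_modified
    BEFORE UPDATE ON dictionary
    FOR EACH ROW EXECUTE FUNCTION update_modified_col();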


Re: Recommendations on improving the insert on conflict do nothing performance

2024-09-12 Thread Durgamahesh Manne
Hi Muhammad Usman Khan

I have already set required values of params.Here issue was about
triggers.I have resolved this issue

Regards
Durga Mahesh

On Thu, Sep 12, 2024 at 10:05 AM Muhammad Usman Khan 
wrote:

> Hi,
> You can use the following approaches for optimization:
>
>- Instead of inserting one row at a time, perform bulk inserts, which
>will reduce the overhead of each individual transaction
>- Partitioning can improve write performance by splitting the data
>into smaller, more manageable chunks
>- Tune postgres configuration like
>work_mem = '16MB'
>shared_buffers = '8GB'
>effective_cache_size = '24GB'
>
>
> On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>> insert into
>> dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
>> values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
>>  *8vcpus and 32gb ram
>>Number of calls per sec 1600 at this time 42% of cpu utilized
>>  Max in ms 33.62 per call
>>  Avg in ms  0.17 per call
>>Table
>> "dictionary.dictionary"
>>  Column |   Type   | Collation | Nullable |
>> Default  | Storage  | Compression | Stats target | Description
>>
>> +--+---+--+--+--+-+--+-
>>  lang   | text |   | not null |
>>| extended | |  |
>>  tid| text |   | not null |
>>| extended | |  |
>>  basetid| text |   | not null |
>>| extended | |  |
>>  sportid| text |   |  |
>>| extended | |  |
>>  brandid| text |   | not null |
>>| extended | |  |
>>  translatedtext | text |   |  |
>>| extended | |  |
>>  objecttype | text |   |  |
>>| extended | |  |
>>  createdat  | timestamp with time zone |   | not null | now()
>>| plain| |  |
>>  modified   | timestamp with time zone |   | not null | now()
>>| plain| |  |
>>  modifiedby | text |   | not null |
>> ''::text | extended | |  |
>>  version| integer  |   | not null | 0
>>| plain| |  |
>> Indexes:
>> "pk_dictionary" PRIMARY KEY, btree (lang, tid)
>> "idx_dictionary_basetid" btree (basetid)
>> "idx_dictionary_brandid" btree (brandid)
>> "idx_dictionary_objecttype" btree (objecttype)
>> "idx_dictionary_sportid" btree (sportid)
>> Triggers:
>> i_dictionary_createdat BEFORE INSERT ON dictionary FOR EACH ROW
>> EXECUTE FUNCTION update_createdat_col()
>> i_dictionary_modified BEFORE INSERT OR UPDATE ON dictionary FOR EACH
>> ROW EXECUTE FUNCTION update_modified_col()
>> Access method: heap
>> How do we improve this query performance without taking more cpu?
>>
>> Regards,
>> Durga Mahesh
>>
>


Recommendations on Improving the debezium performance even on medium workload

2024-09-12 Thread Durgamahesh Manne
Hi Team

How do we improve Debezium performance?
I am looking for recommendations on the Kafka/connector configuration side.
The goal is to minimize lag during moderate or high workload on the db.

poll.interval.ms: default 500ms
Recommended value for balanced performance: 1000ms
Recommended value for high throughput: 1000ms to 5000ms

max.batch.size: default 2048, currently allocated 2560
Recommended value for balanced performance: 500 to 1000
Recommended value for high throughput: 2000 or 5000

max.queue.size: default 8192, currently allocated 10000
Recommended value for balanced performance: 7500 (1000 to 10000)
Recommended value for high throughput: 10000

Based on the Kafka load we can optimize the Debezium connector performance.

heartbeat.interval.ms: default 0, currently allocated 5000ms
Recommended value for this: 5000ms to 10000ms


Are these helpful?

I also need automatic reconnection even on connector (slot) failure.

Please let me know your recommendations.


Regards,

Durga Mahesh


pglogical selective child replication between different partition interval tables

2024-09-12 Thread Durgamahesh Manne
Hi Respected Team
On the source side the tables are created with a 7-day partition interval,
and we have data in them:

changelog_event_p20240830
changelog_event_p20240906


On the target side the tables are created with a 3-day partition interval.
The structure of the tables is the same on both sides.

Would it be possible to replicate a selected child table
(changelog_event_p20240906) to the target with logical replication or
pglogical?

Does CREATE PUBLICATION with a WHERE clause work in this scenario?
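
A minimal sketch of publishing just that child with built-in logical
replication (the publication name is a placeholder; a row filter, i.e.
FOR TABLE ... WHERE (...), additionally requires PostgreSQL 15 or later on
the publisher):

CREATE PUBLICATION pub_changelog_p20240906
    FOR TABLE changelog_event_p20240906;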

Regards.
Durga Mahesh


Generate the valid snapshot during creation of for the purpose of taking pg_dump with --snapshot option

2024-09-15 Thread Durgamahesh Manne
Hi pgdg team

How do I generate the snapshot_name for a given slot on recent versions of
PostgreSQL?

Below is the slot and snapshot info generated on Postgres 10:

osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
  slot_name  | consistent_point |  snapshot_name  | output_plugin
-------------+------------------+-----------------+---------------
 lsr_sync_01 | 0/C000110        | 0003-0002-1     | pgoutput
(1 row)

But I could not generate the same on pgsql 14. Here:

archiving=> select pg_create_logical_replication_slot('pgsql','pgoutput');
 pg_create_logical_replication_slot
-------------------------------------
 (pgsql,3080/12DD6468)

archiving=> select pg_export_snapshot();
 pg_export_snapshot
---------------------
 00C7-015658E6-1
(1 row)

In this case pg_dump fails with "invalid snapshot identifier" when I use
--snapshot=00C7-015658E6-1 on the pg_dump command line.
Need to generate valid snapshot for required slot in order to take backup
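
Two details may explain the failure (a hedged sketch; the slot name pgsql and
the database archiving come from the example above, everything else is an
assumption about the local setup): a snapshot from pg_export_snapshot() is
only usable while the exporting transaction is still open, and the walsender
command CREATE_REPLICATION_SLOT is still available on 14 when the session is
opened as a replication connection.

Session 1 (keep it open afterwards):
psql "dbname=archiving replication=database"
archiving=> CREATE_REPLICATION_SLOT pgsql LOGICAL pgoutput;
-- the result row includes snapshot_name

Session 2, while session 1 stays open:
pg_dump --snapshot='<snapshot_name from session 1>' -Fc -f archiving.dump archiving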

Regards,
Durga Mahesh


Regarding snapshot generation during creation of logical slot in pgsql14

2024-09-15 Thread Durgamahesh Manne
Hi,

How do we generate the snapshot for the logical slot  which can be used in
pg_dump command options with --snapshot in pgsql 14?

Any function to check the snapshot of the slot after creating slot with
pg_create_logical_replication_slot('pgdg',pgoutput);


Regards,
Durga Mahesh


About to find all foreign tables are being used by sproc and view and function

2021-04-26 Thread Durgamahesh Manne
Hi  Respected Team

I need to find the foreign tables used in functions, sprocs, and views.
How can I find all foreign tables being used by a sproc, view, or function?
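
For views the dependency catalogs already record this; a sketch of a query
listing views that reference foreign tables is below (functions and
procedures store their bodies as plain text, so those usually have to be
searched with a LIKE/ILIKE on pg_proc.prosrc instead):

SELECT DISTINCT v.relname  AS view_name,
                ft.relname AS foreign_table
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid     AND d.classid    = 'pg_rewrite'::regclass
JOIN pg_class  v  ON v.oid = r.ev_class
JOIN pg_class  ft ON ft.oid = d.refobjid AND d.refclassid = 'pg_class'::regclass
WHERE ft.relkind = 'f'
  AND v.relkind IN ('v', 'm');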



Thanks & Regards
Durgamahesh Manne


About to know the info of foreign table reference used in any of call ,views,functions

2021-04-27 Thread Durgamahesh Manne
Hi Team

Using the system views and functions:

"I need to get the info on foreign table references used in any calls,
views, or functions."

I found the views, functions, and sprocs that are executed frequently by the
application, using the pg_stat_user_functions view.

Please help me get the info I need.


Thanks & Regards
Durgamahesh Manne


Re: How to Copy/Load 1 billions rows into a Partition Tables Fast

2024-10-14 Thread Durgamahesh Manne
On Mon, 14 Oct, 2024, 23:29 Wong, Kam Fook (TR Technology), <
kamfook.w...@thomsonreuters.com> wrote:

> I am trying to copy a table (Postgres) that is close to 1 billion rows
> into a Partition table (Postgres) within the same DB.  What is the fastest
> way to copy the data?   This table has 37 columns where some of which are
> text data types.
>
> Thank you
> Kam Fook Wong
>
>
>

Hi Kam Fook Wong,

You can achieve it with pg_bulkload utility
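
A plain-SQL alternative sketch, since the source and the partitioned table
are in the same database (the names and key ranges are placeholders; batching
keeps each transaction and its WAL volume bounded):

INSERT INTO target_partitioned
SELECT * FROM source_table
WHERE id >= 0 AND id < 100000000;   -- repeat for successive id ranges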

Regards,
Durga Mahesh

>


Re: Inefficient use of index scan on 2nd column of composite index during concurrent activity

2024-10-14 Thread Durgamahesh Manne
On Fri, 11 Oct, 2024, 23:33 Durgamahesh Manne, 
wrote:

>
>
> On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane 
> wrote:
>
>> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>> composite key (placedon,id)
>>> In concurrent mode if i use id at where clause then query plan for that
>>> id column changes
>>>
>>> How to mitigate it rather than use seperate index for id to continue
>>> without change in query plan (index scan) during concurrent activity
>>>
>>
>> Why the focus on "concurrent mode"? Perhaps explain what you mean by that.
>>
>> Speaking of explain, it might help if you show us the explain plans and
>> how they are not coming out how you want. Also the table definitions, but
>> feel free to not show columns unrelated to the problem.
>>
>> Cheers,
>> Greg
>>
>> Hi Greg
> Thanks for your quick response
>
>  Partitioned table "test"
>Column|   Type   | Collation | Nullable |
> Default | Storage  | Compression | Stats target | Description
>
> -+--+---+--+-+--+-+--+-
>  id   | bigint   |   | not null |
> | plain| |  |
>  externalbetid   | text |   |  |
>   | extended | |  |
>  externalsystem  | text |   |  |
>   | extended | |  |
>  placedon| timestamp with time zone |   | not null |
>   | plain| |  |
>  txnstep | integer  |   |  |
>   | plain| |  |
>  txnstage| text |   |  |
>   | extended | |  |
>  txnstatus   | text |   |  |
>   | extended | |  |
>  "pmk_test" PRIMARY KEY, btree (id, placedon) REPLICA IDENTITY
> if use this  (id,placedon) when running select query then no issues bez
> select picks up first column of composite key
> select * from test where id = '4234';
>  Append  (cost=0.14..42.14 rows=19 width=1355) (actual time=0.177..0.186
> rows=1 loops=1)
>->  Index Scan using test_p2023_07_id_idx on test_p2023_07 test_1
>  (cost=0.14..2.38 rows=1 width=1874) (actual time=0.009..0.009 rows=0
> loops=1)
>  Index Cond: (id = '4234'::text)
>->  Index Scan using test_p2023_08_id_idx on test_p2023_08 test_2
>  (cost=0.14..2.38 rows=1 width=1848) (actual time=0.005..0.005 rows=0
> loops=1)
>  Index Cond: (id = '4234'::text)
>  Planning Time: 0.100 ms
>  Execution Time: 0.40 ms
>
> >>>>>if i change constraint order (placedon,id) then  in this case
>
> I could see same index scan with explain analyze  for 1 call or 2 calls
>
> Here concurrent mode means you are already aware (no of calls increases
> concurrently)
> Sudden cpu spike i have observed which is unusual(more than needed)   when
> no of calls increased concurrently on that query
>
> Based on that info  i suspected that query plan changed hence raised
> question here  this is what i faced with mentioned columns order related to
> problem
>
>
> Example for better understanding to you
> in oracle
> CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
> SELECT /*+ INDEX(orders idx_orders_customer_date) */ *
> FROM orders
> WHERE order_date = '2024-01-01';
> I am not sure how this works . this is the example gathered for you
>
> I hope you can understand  . Sorry i can't explain more than this much
>
>
> Regards,
> Durga Mahesh
>



Hi PGDG

In oracle

Example for better understanding to you

CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);

SELECT /*+ INDEX(orders idx_orders_customer_date) */ *
FROM orders
WHERE order_date = '2024-01-01';

I am not sure how this works in Oracle; this is just an example gathered for
reference.

In a similar way,

do we have anything in Postgres like Oracle's index hints?

Regards,
Durga Mahesh

>


Re: Inefficient use of index scan on 2nd column of composite index during concurrent activity

2024-10-15 Thread Durgamahesh Manne
On Tue, 15 Oct, 2024, 15:15 David Rowley,  wrote:

> On Sat, 12 Oct 2024 at 02:28, Durgamahesh Manne
>  wrote:
> > Second column of composite index not in use effectively with index scan
> when using second column at where clause
> >
> > I have composite index on (placedon,id) of test
> > When quering  select * from test where id = '4234';
> > Value of id changes and during concurrent activity and cpu utilization
> increased toomuch  that i have observed which means query plan changed why
> >
> > I could see index scan with explain for it
> >
> > Is there any way to keep index scan for it during even on concurrency
> rather than seperate index on second column of composite index ?
>
> It sounds like you might be asking about something we call "index skip
> scans". Currently, PostgreSQL does not support these, however there is
> work being done to add support and that might arrive in PG18.
>
> There is some information about a possible workaround in [1] which may
> be of use to you.
>
> David
>
> [1] https://wiki.postgresql.org/wiki/Loose_indexscan


Hi David

Thanks you for valuable info

Regards,
Durga Mahesh

>
>


Re: Question on indexes

2024-10-11 Thread Durgamahesh Manne
On Fri, Oct 11, 2024 at 6:18 PM Greg Sabino Mullane 
wrote:

> (please start a new thread in the future rather than replying to an
> existing one)
>
> You cannot query on b and use an index on (a,b) as you observed. However,
> you can have two indexes:
>
> index1(a)
> index2(b)
>
> Postgres will be able to combine those when needed in the case where your
> WHERE clause needs to filter by both columns. So then you no longer need
> the two-column index.
>
> Cheers,
> Greg
>


Hi greg

I have sent you mail in a new thread; the composite key is on a partitioned table.

Regards,
Durga Mahesh


Re: Question on indexes

2024-10-11 Thread Durgamahesh Manne
On Fri, Oct 11, 2024 at 5:00 PM Greg Sabino Mullane 
wrote:

> if we have any column with large string/text values and we want it to be
>> indexed then there is no choice but to go for a hash index. Please correct
>> me if I'm wrong.
>>
>
> There are other strategies / solutions, but we would need to learn more
> about your use case.
>
> Cheers,
> Greg
>
>

Hi Respected Team

How do we get Postgres to use an index scan on the second column of a
composite index during concurrent activity?
The second column of the composite index is not used effectively for an
index scan when only that column appears in the WHERE clause.

I have a composite index on (placedon, id) of test.
When querying select * from test where id = '4234';
the value of id changes between calls, and during concurrent activity I
observed that CPU utilization increased too much, which suggests the query
plan changed. Why?

I can see an index scan with EXPLAIN for it on a single call or a couple of
calls.

Is there any way to keep an index scan for it even under concurrency, rather
than creating a separate index on the second column of the composite index?


Regards,
Durga Mahesh


Re: Inefficient use of index scan on 2nd column of composite index during concurrent activity

2024-10-11 Thread Durgamahesh Manne
On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane 
wrote:

> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> composite key (placedon,id)
>> In concurrent mode if i use id at where clause then query plan for that
>> id column changes
>>
>> How to mitigate it rather than use seperate index for id to continue
>> without change in query plan (index scan) during concurrent activity
>>
>
> Why the focus on "concurrent mode"? Perhaps explain what you mean by that.
>
> Speaking of explain, it might help if you show us the explain plans and
> how they are not coming out how you want. Also the table definitions, but
> feel free to not show columns unrelated to the problem.
>
> Cheers,
> Greg
>
> Hi Greg
Thanks for your quick response

                                    Partitioned table "test"
     Column     |           Type           | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
----------------+--------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
 id             | bigint                   |           | not null |         | plain    |             |              |
 externalbetid  | text                     |           |          |         | extended |             |              |
 externalsystem | text                     |           |          |         | extended |             |              |
 placedon       | timestamp with time zone |           | not null |         | plain    |             |              |
 txnstep        | integer                  |           |          |         | plain    |             |              |
 txnstage       | text                     |           |          |         | extended |             |              |
 txnstatus      | text                     |           |          |         | extended |             |              |
 "pmk_test" PRIMARY KEY, btree (id, placedon) REPLICA IDENTITY
If I use (id, placedon) in this order, then there are no issues when running
the select query, because the select picks up the first column of the
composite key:
select * from test where id = '4234';
 Append  (cost=0.14..42.14 rows=19 width=1355) (actual time=0.177..0.186
rows=1 loops=1)
   ->  Index Scan using test_p2023_07_id_idx on test_p2023_07 test_1
 (cost=0.14..2.38 rows=1 width=1874) (actual time=0.009..0.009 rows=0
loops=1)
 Index Cond: (id = '4234'::text)
   ->  Index Scan using test_p2023_08_id_idx on test_p2023_08 test_2
 (cost=0.14..2.38 rows=1 width=1848) (actual time=0.005..0.005 rows=0
loops=1)
 Index Cond: (id = '4234'::text)
 Planning Time: 0.100 ms
 Execution Time: 0.40 ms

>>>>> If I change the constraint order to (placedon, id), then in this case

I can still see the same index scan with EXPLAIN ANALYZE for 1 or 2 calls.

Here "concurrent mode" means, as you are already aware, that the number of
calls increases concurrently.
I observed a sudden CPU spike, which is unusual (more than expected), when
the number of calls on that query increased concurrently.

Based on that, I suspected that the query plan had changed, hence I raised
the question here; this is what I faced with the mentioned column order.


Example for better understanding to you
in oracle
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
SELECT /*+ INDEX(orders idx_orders_customer_date) */ *
FROM orders
WHERE order_date = '2024-01-01';
I am not sure how this works; this is just an example gathered for you.

I hope you can understand. Sorry, I can't explain it any better than this.


Regards,
Durga Mahesh


Lock contention issues with repack

2024-10-22 Thread Durgamahesh Manne
Hi

DEBUG: Poll returned: 1

LOG: Command finished in worker 1: CREATE UNIQUE INDEX index_3199790649 ON
repack.table_5832724 USING btree (id)
DEBUG:  swap 
DEBUG: query failed: ERROR: canceling statement due to statement timeout
DETAIL: query was: LOCK TABLE offer.market IN ACCESS EXCLUSIVE MODE
DEBUG: query failed: ERROR: canceling statement due to statement timeout
DETAIL: query was: LOCK TABLE offer.market IN ACCESS EXCLUSIVE MODE

While running pg_repack I faced this issue because of concurrent selects and
updates on the tables being repacked. I have also raised the same on the
pg_repack channel. There are no issues when traffic on the db is low.
How do I resolve this issue even under a moderate db workload?
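
A hedged sketch of the knobs pg_repack itself offers for this (both options
exist in recent pg_repack releases; the table name offer.market comes from
the log above, while the database name and timeout value are placeholders):

pg_repack -d mydb -t offer.market --wait-timeout=300 --no-kill-backend

--wait-timeout controls how long pg_repack waits for its brief ACCESS
EXCLUSIVE lock before acting on conflicting sessions, and --no-kill-backend
makes it give up on that table instead of cancelling or terminating them.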


Fwd: Inefficient use of index scan on 2nd column of composite index during concurrent activity

2024-10-11 Thread Durgamahesh Manne
-- Forwarded message -
From: Durgamahesh Manne 
Date: Mon, Oct 7, 2024 at 10:01 AM
Subject: Inefficient use of index scan on 2nd column of composite index
during concurrent activity
To: 


Hi team

The second column of a composite index is not used effectively by an index
scan when only that column appears in the WHERE clause.

I have a composite index on (placedon, id) of table test.
When querying: select * from test where id = '4234';
the value of id changes between calls, and during concurrent activity I
observed CPU utilization increase far too much, which suggests the query plan
changed. Why?

With EXPLAIN I could see an index scan for it.

Is there any way to keep the index scan for this query even under concurrency,
rather than creating a separate index on the second column of the composite
index?

I hope everyone understands this.

Regards,
Durga Mahesh


Hi Greg

You mentioned the following:

(please start a new thread in the future rather than replying to an
existing one)

You cannot query on b and use an index on (a,b) as you observed. However,
you can have two indexes:

index1(a)
index2(b)

Postgres will be able to combine those when needed in the case where your
WHERE clause needs to filter by both columns. So then you no longer need
the two-column index.
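
For illustration, a minimal sketch of the two-index approach with a
hypothetical table t(a, b):

CREATE TABLE t (a bigint, b text);
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- with enough rows, the planner can use t_b_idx alone here
EXPLAIN SELECT * FROM t WHERE b = '4234';

-- and it can combine both indexes (a BitmapAnd) when both columns are filtered
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = '4234';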



Hi Greg,

Here I am not using the composite index on an ordinary table. The composite
index I use is on a partitioned table, and it is mandatory because the data is
replicated to the target using pglogical (sorry, this was not mentioned
earlier).

The composite key is (placedon, id). In concurrent mode, if I use id in the
WHERE clause, the query plan for that id column changes.

How can I mitigate this without a separate index on id, so the query keeps the
same plan (index scan) during concurrent activity?

I hope you understand this.

Regards,
Durga Mahesh





Cheers,
Greg


Pgsql 16 to 14 using builtin logical

2024-10-25 Thread Durgamahesh Manne
Hi

Can we replicate from PostgreSQL 16 to 14 using built-in logical replication,
similarly to pglogical?
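
For what it's worth, built-in logical replication does not require the
publisher and subscriber to run the same major version, since the subscriber
simply connects to the publisher over the normal protocol. A minimal sketch
(names and connection string are hypothetical; keeping the publication to
plain tables is the safest choice when the subscriber is older):

-- on the PostgreSQL 16 publisher
CREATE PUBLICATION pub16 FOR TABLE public.bet;

-- on the PostgreSQL 14 subscriber
CREATE SUBSCRIPTION sub14
    CONNECTION 'host=pg16-host dbname=archiving user=repl'
    PUBLICATION pub16;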

Regards
Durga Mahesh


Re: Synchronize the dump with a logical slot with --snapshot

2024-09-25 Thread Durgamahesh Manne
Hi Team

--snapshot=snapshotname
(Use the specified synchronized snapshot when making a dump of the database

This option is useful when needing to synchronize the dump with a logical
replication slot) as per the pgdg

How do we synchronize the dump with a logical replication slot with
--snapshot?

I am using PostgreSQL 14, which supports only
pg_create_logical_replication_slot(). How do I generate an internal snapshot
with it?

The CREATE_REPLICATION_SLOT command below is not supported by PostgreSQL 14:

example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
  slot_name  | consistent_point | snapshot_name | output_plugin
-------------+------------------+---------------+---------------
 lsr_sync_01 | 0/C000110        | 0003-0002-1   | pgoutput

Regards,

Durga Mahesh



On Fri, 20 Sept, 2024, 01:27 Durgamahesh Manne, 
wrote:

> Hi Team
>
> --snapshot=*snapshotname*
>
> (Use the specified synchronized snapshot when making a dump of the database
>
> This option is useful when needing to synchronize the dump with a logical
> replication slot) as per the pgdg
>
> How do we synchronize the dump with a logical replication slot
> with --snapshot?
>
> I am using the postgresql 14 version which supports only
> pg_create_logical_replication_slot. How to generate a snapshot with it ?
>
> Below CREATE_REPLICAION_SLOT not supported by postgresql 14
>
> example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
>   slot_name  | consistent_point |snapshot_name| output_plugin
> -+--+-+---
>  lsr_sync_01 | 0/C000110| 0003-0002-1 | pgoutput
>
> Regards,
>
> Durga Mahesh
>


Re: Synchronize the dump with a logical slot with --snapshot

2024-09-28 Thread Durgamahesh Manne
Hi Justin,

I raised the question based on this reference link:
https://opensource-db.com/unlocking-initial-sync-for-logical-replication-in-aws-rds-for-postgresql/
You can also go through it to see the steps. That approach worked on
PostgreSQL 10; for PostgreSQL 14 I will go through the information you
provided to implement the same.

Thanks for your valuable information.

Regards,
Durga Mahesh


On Sat, 28 Sept, 2024, 23:10 Justin,  wrote:

>
>
> On Sat, Sep 28, 2024 at 1:04 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Team
>>
>> Can anyone respond to my question from respected team members ?
>>
>> Durga Mahesh
>>
>> On Thu, Sep 26, 2024 at 2:23 AM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>> Hi Team
>>>
>>> --snapshot=snapshotname
>>> (Use the specified synchronized snapshot when making a dump of the
>>> database
>>>
>>> This option is useful when needing to synchronize the dump with a
>>> logical replication slot) as per the pgdg
>>>
>>> How do we synchronize the dump with a logical replication slot with
>>> --snapshot?
>>>
>>> I am using the postgresql 14 version which supports only
>>> pg_create_logical_replication_slot. How to generate a internal snapshot
>>> with it ?
>>>
>>> Below CREATE_REPLICAION_SLOT not supported by postgresql 14
>>>
>>> example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
>>>   slot_name | consistent_point | snapshot_name | output_plugin
>>> -+--+-+---
>>>  lsr_sync_01 | 0/C000110 | 0003-0002-1 | pgoutput
>>>
>>> Regards,
>>>
>>> Durga Mahesh
>>>
>>>
>>>
>>> On Fri, 20 Sept, 2024, 01:27 Durgamahesh Manne, <
>>> maheshpostgr...@gmail.com> wrote:
>>>
>>>> Hi Team
>>>>
>>>> --snapshot=*snapshotname*
>>>>
>>>> (Use the specified synchronized snapshot when making a dump of the
>>>> database
>>>>
>>>> This option is useful when needing to synchronize the dump with a
>>>> logical replication slot) as per the pgdg
>>>>
>>>> How do we synchronize the dump with a logical replication slot
>>>> with --snapshot?
>>>>
>>>> I am using the postgresql 14 version which supports only
>>>> pg_create_logical_replication_slot. How to generate a snapshot with it ?
>>>>
>>>> Below CREATE_REPLICAION_SLOT not supported by postgresql 14
>>>>
>>>> example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
>>>>   slot_name  | consistent_point |snapshot_name| output_plugin
>>>> -+--+-+---
>>>>  lsr_sync_01 | 0/C000110| 0003-0002-1 | pgoutput
>>>>
>>>> Regards,
>>>>
>>>> Durga Mahesh
>>>>
>>>
>
> Hi Durgamahesh,
>
> I am not sure what you are after with matching pg_dump and replication
> slot together unless you are trying to get a dump to handle the initial
> data sync. There is not a benefit to doing that as the WAL is going to
> build up on the publisher...
>
> You have to create a snapshot using the export function
>
> https://www.postgresql.org/docs/current/sql-set-transaction.html
>
>
> https://www.postgresql.org/docs/17/functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION
>
> Then you can create the logical replication slot with using that slotname
> option
>
> https://www.postgresql.org/docs/17/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-SLOT-NAME
> and no sync option.
>
> Then you tell pg_dump to use that snapshot name snapshot with this option
> --snapshot=snapshotname
>
> https://www.postgresql.org/docs/current/app-pgdump.html
>
> Once pg_restore is done on the destination , you can create a subscription
> using that slotname option probably and specify copy_data = false.
>
> Keep in mind the WAL will build up during this process,  not sure what the
> benefit would be just allowing logical replication to do the initial sync.
>
>
> Thanks
> Justin
>
>
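
To make the sequence Justin outlines concrete, a rough sketch (slot,
publication, and file names are hypothetical). On PostgreSQL 14 an exportable
snapshot can still be obtained by running CREATE_REPLICATION_SLOT over a
replication-protocol connection (replication=database in the connection
string); that connection must stay open and idle while pg_dump runs, otherwise
the exported snapshot goes away:

-- session 1 on the publisher: keep open and idle
psql "dbname=osdb_lsr replication=database"
CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput EXPORT_SNAPSHOT;
-- note the snapshot_name column it returns, e.g. 00000003-0000001B-1

-- session 2: dump the publisher using that snapshot
pg_dump -d osdb_lsr --snapshot=00000003-0000001B-1 -Fc -f initial_sync.dump

-- on the subscriber, after restoring the dump
CREATE SUBSCRIPTION lsr_sync
    CONNECTION 'host=publisher dbname=osdb_lsr'
    PUBLICATION my_pub
    WITH (slot_name = 'lsr_sync_01', create_slot = false, copy_data = false);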


Re: Synchronize the dump with a logical slot with --snapshot

2024-09-27 Thread Durgamahesh Manne
Hi Team

Can anyone from the respected team members respond to my question?

Durga Mahesh

On Thu, Sep 26, 2024 at 2:23 AM Durgamahesh Manne 
wrote:

> Hi Team
>
> --snapshot=snapshotname
> (Use the specified synchronized snapshot when making a dump of the database
>
> This option is useful when needing to synchronize the dump with a logical
> replication slot) as per the pgdg
>
> How do we synchronize the dump with a logical replication slot with
> --snapshot?
>
> I am using the postgresql 14 version which supports only
> pg_create_logical_replication_slot. How to generate a internal snapshot
> with it ?
>
> Below CREATE_REPLICAION_SLOT not supported by postgresql 14
>
> example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
>   slot_name | consistent_point | snapshot_name | output_plugin
> -+--+-+---
>  lsr_sync_01 | 0/C000110 | 0003-0002-1 | pgoutput
>
> Regards,
>
> Durga Mahesh
>
>
>
> On Fri, 20 Sept, 2024, 01:27 Durgamahesh Manne, 
> wrote:
>
>> Hi Team
>>
>> --snapshot=*snapshotname*
>>
>> (Use the specified synchronized snapshot when making a dump of the
>> database
>>
>> This option is useful when needing to synchronize the dump with a logical
>> replication slot) as per the pgdg
>>
>> How do we synchronize the dump with a logical replication slot
>> with --snapshot?
>>
>> I am using the postgresql 14 version which supports only
>> pg_create_logical_replication_slot. How to generate a snapshot with it ?
>>
>> Below CREATE_REPLICAION_SLOT not supported by postgresql 14
>>
>> example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
>>   slot_name  | consistent_point |snapshot_name| output_plugin
>> -+--+-+---
>>  lsr_sync_01 | 0/C000110| 0003-0002-1 | pgoutput
>>
>> Regards,
>>
>> Durga Mahesh
>>
>


Re: Regarding publish_via_partiton_root with pglogical

2024-09-27 Thread Durgamahesh Manne
Hi

Any inputs from the PGDG team members regarding this ?

Regards
Durga Mahesh

On Mon, Jul 22, 2024 at 3:35 PM khan Affan  wrote:

> Hi Durgamahesh
>
> In pglogical extension, there is no direct equivalent to the
> publish_via_partition_root option found in the built-in logical replication
>
> You can achieve similar results manually: when a new partition is added, you
> must add it to the replication set using pglogical.replication_set_add_table.
> Similarly, if existing partitions are reorganized or split, make sure you
> update the replication sets accordingly.
>
> Thanks & Regards
>
>
> *Muhammad Affan (*아판*)*
>
> *PostgreSQL Technical Support Engineer** / Pakistan R&D*
>
> Interlace Plaza 4th floor Twinhub office 32 I8 Markaz, Islamabad, Pakistan
>
> On Sat, Jul 20, 2024 at 12:00 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>>
>> Respected Team
>>
>> I know how to implement partitions with publication and subscription in
>> built-in logical replication:
>> CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
>> (publish_via_partition_root = true); This uses the parent table to replicate
>> data changes from source to target.
>> But could you please provide the logic to implement the same with pglogical?
>> Do I need to add something like publish_via_partition_root to the
>> pglogical.replication_set_add_table functionality?
>> Your response is valuable in this respect.
>>
>> Regards,
>> Durga Mahesh
>>
>
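
To illustrate what Affan describes (the replication set and partition names
here are hypothetical), each partition is added to the set individually, since
pglogical has no publish_via_partition_root equivalent:

SELECT pglogical.replication_set_add_table(
    set_name := 'default',
    relation := 'betplacement.bet_2024_07'::regclass,
    synchronize_data := true);

pglogical.replication_set_add_all_tables('default', ARRAY['betplacement']) can
be used once to pick up every existing table in a schema, but partitions
created afterwards still have to be added with the call above.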


Regarding use of single column as primary key on partitioned table

2024-09-27 Thread Durgamahesh Manne
Hi Team
test=>  CREATE TABLE public.bet (
betid int4 NOT NULL,
externalbetid text NULL,
externalsystem text NULL,
placedon timestamptz NULL,
createdon timestamptz NULL
) partition by list (placedon) ;
CREATE TABLE
test=> alter table public.bet add primary key (betid);
ERROR:  unique constraint on partitioned table must include all
partitioning columns
DETAIL:  PRIMARY KEY constraint on table "bet" lacks column "placedon"
which is part of the partition key.
test=>

Can't we use a primary key on a single column (betid) on a partitioned table,
rather than using a composite key (placedon, betid)?
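
As the error message says, a unique or primary key constraint on a partitioned
table must include all partition key columns, so only the composite form is
accepted there; a unique index on betid alone can only be created on the
individual partitions. A minimal sketch of the accepted form:

ALTER TABLE public.bet ADD PRIMARY KEY (betid, placedon);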

Regards,
Durga Mahesh


Synchronize the dump with a logical slot with --snapshot

2024-09-19 Thread Durgamahesh Manne
Hi Team

--snapshot=*snapshotname*

(Use the specified synchronized snapshot when making a dump of the database

This option is useful when needing to synchronize the dump with a logical
replication slot) as per the pgdg

How do we synchronize the dump with a logical replication slot
with --snapshot?

I am using PostgreSQL 14, which supports only
pg_create_logical_replication_slot(). How do I generate a snapshot with it?

The CREATE_REPLICATION_SLOT command below is not supported by PostgreSQL 14:

example osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
  slot_name  | consistent_point | snapshot_name | output_plugin
-------------+------------------+---------------+---------------
 lsr_sync_01 | 0/C000110        | 0003-0002-1   | pgoutput

Regards,

Durga Mahesh


Regarding logical slots allocation with logical rw

2024-11-30 Thread Durgamahesh Manne
Hi

I have 32 vCPUs and 128 GB RAM. Only 13 slots are created so far, but 18 more
logical slots are needed. In this case:
mwp > 32
max_replication_slots > 40 allocated
logical rw > 18
wal senders > 55
mpw > 6
avmw > 3
Here 18 + 6 + 3 = 27, plus the 13 used slots = 40.
So, based on these resources, up to how many logical slots can we create?
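
The hard ceiling is the max_replication_slots setting rather than the CPU/RAM
figures themselves, and each active consumer additionally needs a free
walsender (max_wal_senders). A quick sketch of checking usage and raising the
limit (the new value is only illustrative, and the change takes effect after a
restart):

SHOW max_replication_slots;
SELECT count(*) AS used_slots FROM pg_replication_slots;
ALTER SYSTEM SET max_replication_slots = 60;   -- requires a server restart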

Regards,
Durga Mahesh


Subscriptions limit for 32cpus and 128gb

2024-12-13 Thread Durgamahesh Manne
Hi

How many subscriptions can we create, at maximum, for managing logical
replication on an instance that has 32 vCPUs and 128 GB?

There is a high workload on 4 DBs and a normal workload on the remaining 30
DBs.
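
There is no fixed per-CPU number. Each subscription needs one apply worker on
the subscriber (plus temporary table-sync workers) and one walsender plus one
replication slot on the publisher, so the practical ceilings are these
settings. A sketch with purely illustrative values, all of which require a
restart:

-- subscriber side
ALTER SYSTEM SET max_logical_replication_workers = 40;
ALTER SYSTEM SET max_worker_processes = 48;
-- publisher side
ALTER SYSTEM SET max_wal_senders = 40;
ALTER SYSTEM SET max_replication_slots = 40;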


Regards
Durga Mahesh


Re: Postgresql database terminates abruptly with too many open files error

2025-01-19 Thread Durgamahesh Manne
-- Forwarded message -
From: Sri Mrudula Attili 
Date: Wed, 15 Jan, 2025, 17:12
Subject: Re: Postgresql database terminates abruptly with too many open
files error
To: Tom Lane 
Cc: 


Hello Tom,


max_connections = 200 and max_files_per_process = 1000, as you mentioned.

So would max_files_per_process = 500 help?

I can see that the number of connections to the database is not exceeding 20,
but these 20 connections are still causing all the damage, i.e. using up the
whole open-files quota.


Thanks,

Sri

On 14/01/2025 14:19, Tom Lane wrote:
> Sri Mrudula Attili  writes:
>>We have a PostgreSQL VDB (virtual database - Delphix) that keeps
>> terminating due to "too many open files".
> What do you have max_connections set to, and how many actually-live
> server processes are there typically?
>
>> The number of allowed openfiles at OS level are 65000.
> I'm suspecting that you either need to increase that, decrease
> max_files_per_process (which defaults to 1000), or decrease
> max_connections.
>
>   regards, tom lane


Hi Team,

What is the final conclusion for this ?
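
For reference, a sketch of the two server settings Tom points at (values are
purely illustrative; both can only be set at server start, and the OS-level
limit is changed outside Postgres). The rough worst case is
max_connections * max_files_per_process open files, which needs to stay under
the OS limit of 65000 mentioned above:

ALTER SYSTEM SET max_files_per_process = 500;
ALTER SYSTEM SET max_connections = 100;
-- then restart the server and re-check with:
SHOW max_files_per_process;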

Regards,
Durga Mahesh


Re: Records count mismatch with logical replication

2025-01-21 Thread Durgamahesh Manne
On Tue, Jan 21, 2025 at 9:24 PM Adrian Klaver 
wrote:

> On 1/21/25 04:08, Durgamahesh Manne wrote:
> > Hi  Team,
> >
> > I have publication and subscription servers. Data replication seems to be
> > running with minimal lag, but the record counts mismatch by more than ten
> > thousand records between the source and destination tables.
> >
> > Could you please help in resolving this issue?
>
> Not without a good deal more information:
>
> 1) Postgres version on either side of the replication.
>
> 2) The replication configuration.
>
> 3) Define how lag is being calculated and what 'minimal' is.
>
> 4) Define how the record counts are being derived.
>
> 5) The network distance between the servers.
>
> >
> > Regards,
> > Durga Mahesh
> >
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
Hi Adrian Klaver,

Really, thanks for your quick response.

This happened during a repack: the lag went to more than 350 GB and then
gradually decreased to a minimal lag after pg_repack finished.

1) Postgres version on either side of the replication.

 Source(publication) :16.4
 Destination(subscription) : 14.11

2) The replication configuration.
OLAP workload archiving (32GB 8 Vcpus)
 Source : wal_level = logical
wal_sender_timeout = 30s
 max_wal_senders = 40
max_replication_slots = 20
max_logical_replication_workers = 4
wal_buffers = 64MB
commit_delay = 2000
commit_siblings = 12
wal_writer_delay = 300
wal_writer_flush_after = 1MB
bgwriter_delay = 20
min_wal_size = 8GB
max_wal_size = 32Gb

Destination :  128GB 32 vcpus
wal_level = logical
wal_receiver_timeout = 30s
 max_wal_senders = 40
max_replication_slots = 60
max_logical_replication_workers = 23
wal_buffers = 64MB
commit_delay = default
commit_siblings = default
wal_writer_delay = default
wal_writer_flush_after = default
bgwriter_delay = 20
min_wal_size = 8GB
max_wal_size = 32Gb




3) Define how lag is being calculated and what 'minimal' is.

postgres=> select
slot_name,pg_wal_lsn_diff(pg_current_wal_lsn(),restart_lsn) as bytes_behind,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as
behind_size, active from pg_replication_slots;
  slot_name  | bytes_behind | behind_size | active
-------------+--------------+-------------+--------
 cls_eva_msa |     22906216 | 22 MB       | t

4) Define how the record counts are being derived.
  Source : archiving=> select count(*) from archiving.events_archive ;
  count
-
 1262908
(1 row)

Destination : archiving=> select count(*) from archiving.events_archive ;
  count
-
 1252062
(1 row)

5) The network distance between the servers.

 Both are under same vpc security groups

Regards
Durga Mahesh


Re: Records count mismatch with logical replication

2025-01-22 Thread Durgamahesh Manne
On Wed, 22 Jan, 2025, 03:11 Adrian Klaver, 
wrote:

> On 1/21/25 11:40, Durgamahesh Manne wrote:
> >
> >
> > On Wed, 22 Jan, 2025, 00:22 Adrian Klaver,  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> >
> >
> > On 1/21/25 10:06 AM, Durgamahesh Manne wrote:
> >
> >  >
> >  > Hi Adrian Klaver
> >  >
> >  > 22,906,216 bytes/10,846 rows  works out to 2112 bytes per row.
> >  >
> >  > Is that a reasonable per row estimate?
> >  >
> >  > Yes  sometimes would be vary
> >
> > If I am following the lag went from 350GB behind to 22MB.
> >
> > Is the issue that the lag has stalled at 22MB?
> >
> >  >
> >  > Regards,
> >  > Durga Mahesh
> >  >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
> >
> > Hi Adrian Klaver
> >
> > Is the issue that the lag has stalled at 22MB?
> >
> > It depends on the load on the source.
> > The lag either decreases or increases by KBs or MBs (not GBs);
> > it is not constant while data is being replicated to the target.
>
> Previously you stated:
>
> "Both are under same vpc security groups"
>
> Does this refer to AWS?
>
> If so per:
>
> https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
>
> "The following diagram shows an example VPC. The VPC has one subnet in
> each of the Availability Zones in the Region, EC2 instances in each
> subnet, and an internet gateway to allow communication between the
> resources in your VPC and the internet."
>
> So where are the two Postgres instances physically located relative to
> each other?
>
> >
> > But records count varies with difference of more than 10 thousand
>
> Have you looked at the I/0 statistics between the Postgres instances?
>
> >
> > How to mitigate this issue in simplest way ?
>
> Until it is determined what is causing the lag there is no way to deal
> with it.
>
> >
> > Regards
> > Durga Mahesh
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>

Hi

So where are the two Postgres instances physically located relative to
each other?
Both are in Ireland, under the same VPC security groups.


>
> But records count varies with difference of more than 10 thousand

Have you looked at the I/O statistics between the Postgres instances?

Everything seems to look fine in the pg_replication_slots output.

Does the pg_logical_slot_get_changes() function help to pull the pending
changes to the subscription, so that it can be synced with the publication
server in real time?
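
On the pg_logical_slot_get_changes() idea: the get variants consume changes
from the slot, which would break the subscription that is using it, so they are
not a way to catch the subscriber up. The peek variants only read. A hedged
sketch for looking at what is still pending on a pgoutput slot (the publication
name is hypothetical, and this only works while the slot is not active, i.e.
not currently held by a walsender):

SELECT count(*) AS pending_changes
FROM pg_logical_slot_peek_binary_changes(
       'cls_eva_msa', NULL, NULL,
       'proto_version', '1',
       'publication_names', 'my_pub');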

Regards,
Durgamahesh


Re: Records count mismatch with logical replication

2025-01-23 Thread Durgamahesh Manne
On Thu, Jan 23, 2025 at 11:24 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:

>
>
> On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver 
> wrote:
>
>> On 1/22/25 18:53, Durgamahesh Manne wrote:
>> >
>> >
>> >
>>
>> >  > But records count varies with difference of more than 10 thousand
>> >
>> > Have you looked at the I/0 statistics between the Postgres instances?
>> >
>> > Seems everything looks good with pg replication slots
>>
>> Except the subscriber is lagging behind the publisher.
>>
>> '... everything looks good' is an opinion not actual data.
>>
>> >
>> > Does this pg logical slot get changes function help to pull pending
>> > changes to subscription that can be sync with publication server for
>> > real time sync ?
>>
>> Are you referring to this?:
>>
>>
>> https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION
>>
>> Though I am not sure you want to do this as from above:
>>
>> "When requesting synchronous replication, each commit of a write
>> transaction will wait until confirmation is received that the commit has
>> been written to the write-ahead log on disk of both the primary and
>> standby server. The only possibility that data can be lost is if both
>> the primary and the standby suffer crashes at the same time. This can
>> provide a much higher level of durability, though only if the sysadmin
>> is cautious about the placement and management of the two servers.
>> Waiting for confirmation increases the user's confidence that the
>> changes will not be lost in the event of server crashes but it also
>> necessarily increases the response time for the requesting transaction.
>> The minimum wait time is the round-trip time between primary and standby."
>>
>> If you are not referring to above then you will need to explain further.
>>
>> >
>> > Regards,
>> > Durgamahesh
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
> Hi
>
> Source Publication Side:
> archiving=> select * from pg_replication_slots ;
> -[ RECORD 1 ]---+--
> slot_name   | cls_eva_msa
> plugin  | pgoutput
> slot_type   | logical
> datoid  | 16601
> database| archiving
> temporary   | f
> active  | t
> active_pid  | 3237
> xmin|
> catalog_xmin| 2935229621
> restart_lsn | 16C8/40CEC600
> confirmed_flush_lsn | 16C8/440FFF50
> wal_status  | reserved
> safe_wal_size   |
> two_phase   | f
> conflicting | f
> -[ RECORD 2 ]---+--
> slot_name   | cle_clm_mka
> plugin  | pgoutput
> slot_type   | logical
> datoid  | 16601
> database| archiving
> temporary   | f
> active  | t
> active_pid  | 3501
> xmin|
> catalog_xmin| 2935229621
> restart_lsn | 16C8/40CEC600
> confirmed_flush_lsn | 16C8/440FFF50
> wal_status  | reserved
> safe_wal_size   |
> two_phase   | f
> conflicting | f
> archiving=> select * from pg_stat_replication;
> client_hostname  |
> client_port  | 52506
> backend_start| 2025-01-23 16:58:04.697304+00
> backend_xmin |
> state| streaming
> sent_lsn | 16C7/BDE4BB48
> write_lsn| 16C7/BDE4BB48
> flush_lsn| 16C7/BDE4BB48
> replay_lsn   | 16C7/BDE4BB48
> write_lag| 00:00:00.002271
> flush_lag| 00:00:00.002271
> replay_lag   | 00:00:00.002271
> sync_priority| 0
> sync_state   | async
> reply_time   | 2025-01-23 17:34:39.901979+00
> -[ RECORD 2 ]+--
> pid  | 3501
> usesysid | 14604130
> usename  | archiving
> application_name | cle_clm_mka
> client_addr  | 10.80.0.168
> client_hostname  |
> client_port  | 55412
> backend_start| 2025-01-22 09:31:11.83963+00
> backend_xmin |
> state| streaming
> sent_lsn | 16C7/BDE4BB48
> write_lsn| 16C7/BDE4BB48
> flush_lsn| 16C7/BDE4BB48
> replay_lsn   | 16C7/BDE4BB48
> write_lag| 00:00:00.001642
> flush_lag| 00:00:00.023143
> replay_lag   | 00:00:00.001642
> sync_priority| 0
> sync_state   | async
> reply_time   | 2025-01-23 17:34:39.903052+00
>
> Subscription Side : archiving=> select * 

Re: Records count mismatch with logical replication

2025-01-23 Thread Durgamahesh Manne
On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver 
wrote:

> On 1/22/25 18:53, Durgamahesh Manne wrote:
> >
> >
> >
>
> >  > But records count varies with difference of more than 10 thousand
> >
> > Have you looked at the I/0 statistics between the Postgres instances?
> >
> > Seems everything looks good with pg replication slots
>
> Except the subscriber is lagging behind the publisher.
>
> '... everything looks good' is an opinion not actual data.
>
> >
> > Does this pg logical slot get changes function help to pull pending
> > changes to subscription that can be sync with publication server for
> > real time sync ?
>
> Are you referring to this?:
>
>
> https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION
>
> Though I am not sure you want to do this as from above:
>
> "When requesting synchronous replication, each commit of a write
> transaction will wait until confirmation is received that the commit has
> been written to the write-ahead log on disk of both the primary and
> standby server. The only possibility that data can be lost is if both
> the primary and the standby suffer crashes at the same time. This can
> provide a much higher level of durability, though only if the sysadmin
> is cautious about the placement and management of the two servers.
> Waiting for confirmation increases the user's confidence that the
> changes will not be lost in the event of server crashes but it also
> necessarily increases the response time for the requesting transaction.
> The minimum wait time is the round-trip time between primary and standby."
>
> If you are not referring to above then you will need to explain further.
>
> >
> > Regards,
> > Durgamahesh
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
Hi

Source Publication Side:
archiving=> select * from pg_replication_slots ;
-[ RECORD 1 ]---+--
slot_name   | cls_eva_msa
plugin  | pgoutput
slot_type   | logical
datoid  | 16601
database| archiving
temporary   | f
active  | t
active_pid  | 3237
xmin|
catalog_xmin| 2935229621
restart_lsn | 16C8/40CEC600
confirmed_flush_lsn | 16C8/440FFF50
wal_status  | reserved
safe_wal_size   |
two_phase   | f
conflicting | f
-[ RECORD 2 ]---+--
slot_name   | cle_clm_mka
plugin  | pgoutput
slot_type   | logical
datoid  | 16601
database| archiving
temporary   | f
active  | t
active_pid  | 3501
xmin|
catalog_xmin| 2935229621
restart_lsn | 16C8/40CEC600
confirmed_flush_lsn | 16C8/440FFF50
wal_status  | reserved
safe_wal_size   |
two_phase   | f
conflicting | f
archiving=> select * from pg_stat_replication;
client_hostname  |
client_port  | 52506
backend_start| 2025-01-23 16:58:04.697304+00
backend_xmin |
state| streaming
sent_lsn | 16C7/BDE4BB48
write_lsn| 16C7/BDE4BB48
flush_lsn| 16C7/BDE4BB48
replay_lsn   | 16C7/BDE4BB48
write_lag| 00:00:00.002271
flush_lag| 00:00:00.002271
replay_lag   | 00:00:00.002271
sync_priority| 0
sync_state   | async
reply_time   | 2025-01-23 17:34:39.901979+00
-[ RECORD 2 ]+--
pid  | 3501
usesysid | 14604130
usename  | archiving
application_name | cle_clm_mka
client_addr  | 10.80.0.168
client_hostname  |
client_port  | 55412
backend_start| 2025-01-22 09:31:11.83963+00
backend_xmin |
state| streaming
sent_lsn | 16C7/BDE4BB48
write_lsn| 16C7/BDE4BB48
flush_lsn| 16C7/BDE4BB48
replay_lsn   | 16C7/BDE4BB48
write_lag| 00:00:00.001642
flush_lag| 00:00:00.023143
replay_lag   | 00:00:00.001642
sync_priority| 0
sync_state   | async
reply_time   | 2025-01-23 17:34:39.903052+00

Subscription Side : archiving=> select * from pg_stat_subscription where
subname = 'cls_eva_msa';
-[ RECORD 1 ]-+--
subid | 1936652827
subname   | cls_eva_msa
pid   | 18746
relid |
received_lsn  | 16C7/FB48DFE0
last_msg_send_time| 2025-01-23 17:41:11.924562+00
last_msg_receipt_time | 2025-01-23 17:41:11.933344+00
latest_end_lsn| 16C7/FB48DFE0
latest_end_time   | 2025-01-23 17:41:11.924562+00

archiving=> select * from pg_stat_subscription where subname =
'cle_clm_mka';
-[ RECORD 1 ]-+--
subid | 1892055116
subname   | cle_clm_mka
pid
