Column: dFetch | Type: boolean | Default: false | Storage: plain
So please help in creating the BRIN index on the above column of the table.
Regards
Durgamahesh Manne
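For reference, a minimal sketch of creating a BRIN index; the table and column names here are placeholders, not taken from the thread:

```sql
-- BRIN works best on columns whose values correlate with physical row order.
CREATE INDEX idx_table1_createdon_brin
    ON table1 USING brin ("createdon");

-- pages_per_range is tunable (default 128); smaller ranges give a more
-- precise index at the cost of a slightly larger one:
-- CREATE INDEX ... USING brin ("createdon") WITH (pages_per_range = 64);
```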
On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman wrote:
>
>
> *From:* Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> *Sent:* Wednesday, September 19, 2018 10:04 AM
> *To:* Igor Neyman
> *Subject:* Re: Regrading brin_index on required column of the table
>
> On Wed,
On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer
wrote:
>
>
> Am 19.09.2018 um 15:51 schrieb Durgamahesh Manne:
> > I have created BRIN index on few columns of the table without any
> > issues. But i am unable to create BRIN index on one column of the
> > table
The execution time was not reduced. The query took around 7 minutes to
execute with BTREE and HASH indexes on the required columns.
So please help in reducing the DISTINCT query execution time.
Regards
Durgamahesh Manne
On Wed, Sep 19, 2018 at 7:21 PM Durgamahesh Manne
wrote:
> Hi
> R
Planning time: 0.237 ms
Execution time: 390252.089 ms
So I am unable to reduce the query execution time; it takes around 7
minutes to execute both with and without indexes.
Please help in reducing the query execution time.
Regards
Durgamahesh Manne
On Wed, Sep 19,
On Wed, Sep 19, 2018 at 8:27 PM Andreas Kretschmer
wrote:
>
>
> Am 19.09.2018 um 16:43 schrieb Durgamahesh Manne:
> >
> >
> > On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer
> > mailto:andr...@a-kretschmer.de>> wrote:
> >
> >
> >
On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer
wrote:
> Hi,
>
>
> the problem is there:
>
>
> Am 20.09.2018 um 11:48 schrieb Durgamahesh Manne:
> > Unique (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
> > time=326397.551
m table1 rec join
table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false ;
The above query took around 47 seconds to execute without DISTINCT.
The same query took around 7 minutes to execute with DISTINCT.
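A common mitigation, sketched here against the column names visible in the quoted query (assuming de-duplication on the selected columns is really needed), is to let the planner use a hash aggregate via GROUP BY, and to give the sort enough memory to stay off disk:

```sql
-- Same join, de-duplicated via GROUP BY instead of DISTINCT;
-- the planner can often choose a HashAggregate here.
SELECT rec."vchSubmittersCode"
FROM   table1 rec
JOIN   table2 sub_head
       ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
WHERE  rec."bFetch" = false
  AND  sub_head."bFetch" = false
GROUP  BY rec."vchSubmittersCode";

-- Raising work_mem for the session can keep the sort in memory:
SET work_mem = '256MB';
```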
On T
On Thu, Sep 20, 2018 at 3:41 PM Durgamahesh Manne
wrote:
> hi
> as per your request
> i ran below query without distinct
>
> select sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice&qu
On Thu, Sep 20, 2018 at 3:12 PM Durgamahesh Manne
wrote:
>
>
> Hi
>
> As per your suggestion
>
>
> i ran explain analyse for distinct query
>
> the size of the table1 is 30mb
> the size of the table2 is 368kb
>
> EXPLAIN ANALYZE select distinct sub_head.
On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer
wrote:
>
>
> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
> > Query was executed at less time without distinct
> >
> > As well as query was taking around 7 minutes to complete execution
> > with distinct
>
On Thu, Sep 20, 2018 at 7:25 PM Durgamahesh Manne
wrote:
>
>
> On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>>
>>
>> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
>> > Query was executed at less time
Sort Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
Sort Method: external merge  Disk: 42758304kB
->  Nested Loop  (cost=0.42..266305.78 rows=59959206 width=677) (actual time=0.122..73786.837 rows=61595746 loops=1)
      ->  Seq Scan on "table3" j  (cost=0.00..669.12 rows=25132 width=591) (actual time=0.021..28.338 rows=25132 loops=1)
            Filter: (NOT "bFetch")
      ->  Index Scan using cpr_idx4 on table2 k  (cost=0.42..6.92 rows=365 width=107) (actual time=0.838..2.244 rows=2451 loops=25132)
            Index Cond: (("vchAgentTaxID")::text = (j.vchagenttaxid)::text)
Planning time: 2.369 ms
Execution time: 1807771.091 ms
So I need to execute the below query in less time. Please help in
optimising the execution time of this complex query.
Regards
Durgamahesh Manne
On Fri, Sep 21, 2018 at 7:38 PM Durgamahesh Manne
wrote:
> Hi
>
> Complex query taken around 30 minutes to execute even i have
> increased work_mem value to 4GB temporarily as total ram is 16gb
>
> Explain analyze query taken around 30 minutes to execute even i have
> cr
On Fri, Sep 21, 2018 at 9:12 PM Andreas Kretschmer
wrote:
>
>
> Am 21.09.2018 um 17:13 schrieb Durgamahesh Manne:
> > query is below
>
> query and plan still not readable. Store it into a textfile and attach
> it here.
>
>
> Andreas
>
> --
> 2ndQuadra
Thank you all very much for this information
On Sat, Sep 22, 2018 at 12:38 AM Alban Hertroys wrote:
>
>
> > On 21 Sep 2018, at 17:49, Durgamahesh Manne
> wrote:
> >
> >
>
> Considering how hard you try to get rid of duplicates, I'm quite convinced
&
Hi
This is regarding a BDR extension issue. I got the below error when I
tried to create the bdr extension:
ERROR: could not open extension control file
"opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file or
directory
Regards
Durgamahesh Manne
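The error means the server cannot find bdr.control under its extension directory, i.e. the BDR extension files are not installed for that server build. A quick way to check what the server can actually see:

```sql
-- List extensions this server can find in its sharedir:
SELECT name, default_version
FROM   pg_available_extensions
WHERE  name LIKE 'bdr%';

-- The directory the server searches can be found from the shell with:
--   pg_config --sharedir
```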
On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
wrote:
> On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > This is regarding bdr extension issue. I got below error at the time i
> > have tried to create the bdr extention
> >
> >
> > ERROR: c
On Mon, Oct 1, 2018 at 7:34 PM Adrian Klaver
wrote:
> On 10/1/18 1:08 AM, Durgamahesh Manne wrote:
> >
> >
> > On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 9/28/18 8:
ould be the
> > ones to ask about configuring.
>
> i said it already, BDR3 is not for public, only for our customers. You
> will need a own support contract.
>
>
> Durgamahesh Manne, please contact us, if you are interesst in BDR version
> 3.
>
>
> Regards, And
Hi
Please let me know the maximum length of varchar & text in Postgres.
Regards
Durgamahesh Manne
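A small illustration of the limits discussed later in this thread (the table names are placeholders): varchar with an explicit length specifier is capped at 10485760 characters, while text, or varchar without a specifier, is bounded only by the general ~1GB field-size limit.

```sql
CREATE TABLE t_demo (v varchar(10485760));    -- accepted: largest allowed specifier
-- CREATE TABLE t_bad (v varchar(10485761)); -- ERROR: length too large

CREATE TABLE t_demo2 (v text);                -- no declared limit; ~1GB per value
```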
On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
wrote:
> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar & text in postgres
>
> https://www.postgresql.org/docs/10/static/datatype-character.html
> >
&g
On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne
wrote:
>
>
> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
> wrote:
>
>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>> > Hi
>> >
>> > please let me know the max length of varchar & text
On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne
wrote:
>
>
> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
>> wrote:
>>
>&g
On Mon, Oct 15, 2018 at 2:42 PM Durgamahesh Manne
wrote:
>
>
> On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> w
On Mon, Oct 15, 2018 at 3:11 PM Thomas Kellerer wrote:
> Durgamahesh Manne schrieb am 15.10.2018 um 11:18:
> > was there any specific reason that you have given max length for varchar
> is limited to 10485760 value?
> >
> > why you have not given max length for varcha
On Mon, Oct 15, 2018 at 7:54 PM Tom Lane wrote:
> Durgamahesh Manne writes:
> >>> If character varying is used without length specifier, the type
> >>> accepts strings of any size
> >>> but varchar does not accept more than this 10485760 value
>
> Yo
On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> So i need unlimited length data type for required column of the table for
>> sto
On Mon, Oct 15, 2018 at 9:52 PM Adrian Klaver
wrote:
> On 10/15/18 8:56 AM, Durgamahesh Manne wrote:
>
>
> > I request you all community members to provide built in bdr v3 version
> > replication for public as multimaster replication is on high priority
> > again
is taking more than 7 hours to complete.
I need to reduce the dump time of that table, which is 88GB in size.
Regards
Durgamahesh Manne
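One lever worth knowing here, sketched below with placeholder database and path names: pg_dump's parallel mode (-j) only parallelises across tables, so a single 88GB table still dumps in one job; for one huge table the usual gains come from reducing compression (-Z0) and writing to fast storage.

```shell
# Parallel dump in directory format (-j requires -Fd); helps when the
# dump covers many tables:
pg_dump -Fd -j 4 -f /backups/mydb_dir mydb

# For a single huge table, skipping compression may be faster:
pg_dump -Fc -Z0 -t big_table -f /backups/big_table.dump mydb

# The restore side can also parallelise:
pg_restore -j 4 -d mydb /backups/mydb_dir
```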
On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari wrote:
> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
> wrote:
> > Logical dump of that table is taking more than 7 hours to be completed
> >
> > I need to reduce to dump time of that table that has 88GB in size
>
On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari wrote:
> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
> wrote:
> > Is there any way to reduce dump time when i take dump of the table
> which has 148gb in size without creating partition* on that table has 148gb
> in siz
On Wed, Oct 16, 2019 at 3:22 PM Durgamahesh Manne
wrote:
>
>
> On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari wrote:
>
>> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>> wrote:
>> > Is there any way to reduce dump time when i take dump of the table
&
Hi
To the respected PostgreSQL international team
Please let me know whether automatic table partitioning is possible in
pgsql 12 without using a trigger function.
Regards
Durgamahesh Manne
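For context, declarative partitioning (available well before v12) routes rows automatically with no trigger function; what is not automatic is the creation of new partitions over time, which tools like pg_partman schedule. A minimal sketch with hypothetical names:

```sql
CREATE TABLE measurement (
    id      bigint NOT NULL,
    logdate date   NOT NULL
) PARTITION BY RANGE (logdate);

CREATE TABLE measurement_2020q1 PARTITION OF measurement
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');

-- INSERTs into "measurement" are routed to the matching partition
-- automatically; no trigger is involved.
```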
Hi
To the respected PostgreSQL international team
Please let me know that automatic table partitioning without using trigger
function is possible in pgsql 12 or not ?
Regards
Durgamahesh Manne
On Fri, Jan 17, 2020 at 7:43 PM Stephen Frost wrote:
> Greetings,
>
> * Durgamahesh Manne (maheshpostgr...@gmail.com) wrote:
> > Please let me know that automatic table partitioning is possible in pgsql
> > 12 or not without using trigger function
>
> The approach I
bash
# simple shmsetup script: suggest kernel.shmmax/shmall sized to half of RAM
page_size=$(getconf PAGE_SIZE)       # bytes per page
phys_pages=$(getconf _PHYS_PAGES)    # total physical pages
shmall=$((phys_pages / 2))           # shmall is expressed in pages
shmmax=$((shmall * page_size))       # shmmax is expressed in bytes
echo kernel.shmmax = $shmmax
echo kernel.shmall = $shmall
Regards
Durgamahesh Manne
json","authorization":"Basic
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
text/json, text/x-json, text/javascript, application/xml, text/xml"
NOTE: I have created a pg_trgm-based GIN index on the vch_message column
of the slp01 table, but it occupied more disk space, hence I deleted the
trgm-based GIN index.
Please help in creating a GIN index on the vch_message column of the slp01 table.
Regards
Durgamahesh Manne
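A sketch using the table and column names from the post; note that trigram GIN indexes are inherently large, since gin_trgm_ops stores every trigram of every string, so size grows with total text volume:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Supports LIKE/ILIKE '%...%' and similarity searches on vch_message:
CREATE INDEX idx_slp01_vch_message_trgm
    ON slp01 USING gin (vch_message gin_trgm_ops);
```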
On Sat, May 23, 2020 at 6:50 PM Andreas Kretschmer
wrote:
>
>
> Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:
> > Hi
> >
> > Respected to PGDG GLOBAL TEAM
> >
> > I am getting this error( ERROR: data type character varying has no
> > default op
insert the data into the foreign table
1) Is there any way to run INSERT, DELETE and UPDATE queries on foreign tables?
Regards
durgamahesh manne
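Whether DML works depends on the FDW: postgres_fdw supports writes, while tds_fdw (the wrapper configured in this thread) is, as far as I know, read-only. A sketch against a hypothetical postgres_fdw foreign table:

```sql
-- remote_accounts is a placeholder foreign table defined via postgres_fdw.
INSERT INTO remote_accounts (id, name) VALUES (1, 'demo');
UPDATE remote_accounts SET name = 'demo2' WHERE id = 1;
DELETE FROM remote_accounts WHERE id = 1;
```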
On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule
wrote:
> Hi
>
> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
> maheshpostgr...@gmail.com> napsal:
>
>> Hi
>>
>> Respected community members
>>
>> I have configured tds_fdw on postgres
On Wed, Nov 28, 2018 at 6:31 PM Durgamahesh Manne
wrote:
>
>
> On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule
> wrote:
>
>> Hi
>>
>> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
>> maheshpostgr...@gmail.com> napsal:
>>
>&g
Number"))=ltrim(rtrim(TFA.client_account_key))
where AC."iInsightAccountID" is null;
The query has been executing for a long time even after I created the
required indexes on the columns of the tables.
Please help with faster query execution.
Regards
durgamahesh manne
On Mon, Jan 28, 2019 at 6:34 PM Ron wrote:
> On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > below query is being executed for long time
> >
> > Select
> > distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> > ltrim(rtrim(TFA.client_
On Mon, Jan 28, 2019 at 8:41 PM Adrian Klaver
wrote:
> On 1/28/19 5:04 AM, Ron wrote:
> > On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> >> Hi
> >>
> >> below query is being executed for long time
> >>
> >> Select
> >> distinc
Hi
Respected pgsql team
Please let me know the pgaudit parameter to store pgaudit log files
separately; I don't want to store pgaudit log files in the pgsql
log_directory location.
Regards
durgamahesh manne
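As the replies below note, pgAudit has no separate log directory; it writes through the standard PostgreSQL logging facility. What can be controlled is what gets audited, e.g. (a sketch, values illustrative):

```sql
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
ALTER SYSTEM SET pgaudit.log_relation = on;
SELECT pg_reload_conf();
-- Audit entries land in the normal server log prefixed with "AUDIT:",
-- so separating them afterwards is a matter of filtering that log.
```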
On Fri, Mar 29, 2019 at 8:58 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 29/3/19 5:15 μ.μ., Durgamahesh Manne wrote:
>
> Hi
> Respected pgsql team
>
> please let me know the pgaudit parameter to store pgaudit log files only
>
> i don't
On Saturday, March 30, 2019, David Steele wrote:
> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility
Hi
Respected postgres team
Please let me know an open-source application/interface to monitor only
the pgaudit log files, as I have installed the pgaudit tool.
Regards
Durgamahesh Manne
hi
Respected international pgsql team
pershing=# grant INSERT on public.hyd to ravi;
GRANT
I have granted INSERT access to the non-superuser (ravi).
pershing=> insert into hyd (id,name) values('2','delhi');
INSERT 0 1
Here the data was inserted.
pershing=# grant UPDATE on public.hyd to ravi;
GRANT
i h
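The grants above can be combined; one detail worth knowing (a sketch on the same table and role): an UPDATE whose WHERE clause reads existing columns also needs SELECT privilege.

```sql
GRANT INSERT, UPDATE ON public.hyd TO ravi;

-- Needed for UPDATE ... WHERE, since the WHERE clause reads the table:
GRANT SELECT ON public.hyd TO ravi;
```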
On Thu, Apr 4, 2019 at 3:55 PM Ron wrote:
> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
> > hi
> > Respected international pgsql team
> >
> > pershing=# grant INSERT on public.hyd to ravi;
> > GRANT
> > i have granted insert command access to non superus
On Thu, Apr 4, 2019 at 4:14 PM Durgamahesh Manne
wrote:
>
>
>
> On Thu, Apr 4, 2019 at 3:55 PM Ron wrote:
>
>> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
>> > hi
>> > Respected international pgsql team
>> >
>> > pershing=# grant INS
*From:* Durgamahesh Manne
*Sent:* Thursday, April 4, 2019 12:07 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* dbuser acess privileges
hi
Respected international pgsql team
pershing=# grant INSERT on public.hyd to ravi;
GRANT
i have granted insert command access to non
I configured tail_n_mail on the server.
How do I get db object errors for a specific user (ravi) from the
Postgres log? Suppose the username is ravi:
INCLUDE: ravi ERROR:
Is this the correct approach to get errors related to a specific db user
(ravi) from the pg log?
Regards
Durgamahesh Manne
Hi
Postgres supports only up to microseconds (6 decimal digits of precision).
How do we generate timestamps with nanosecond precision, given that RDS
Postgres does not support the timestamp9 extension?
Is there a way to generate nanosecond-precision timestamps with
pg_partman epoch partitioning, with or without typecasting?
Hi Respected Team
By default run_maintenance_proc() does not detach tables concurrently.
How do we implement concurrent table detach without blocking running
sessions in prod? This is very critical to implement for pg_partman.
If this is not available yet in 5.1.0, then when can I expect to get it?
if alread
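For reference, PostgreSQL itself (v14+) can detach a partition without blocking readers; the partition names below are placeholders:

```sql
-- Must run outside a transaction block; only one partition per table
-- can be pending detach at a time.
ALTER TABLE bet DETACH PARTITION bet_p20240830 CONCURRENTLY;
```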
Hi
Do new inserts block while vacuum freeze operations are being performed?
When autovacuum runs, it freezes the transaction IDs (XIDs) of the table
it is working on. Does this mean that transactions started before
autovacuum began are allowed to complete, but new transactions are
blocked?
Hi David.
Excellent response from you .Great
Regards,
Durga Mahesh
On Thu, Jul 18, 2024 at 11:28 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wednesday, July 17, 2024, Durgamahesh Manne
> wrote:
>
>>
>> Could you please provide more clarity on t
Hi Respected Team
With pg_partman, by default run_maintenance_proc() does not detach
tables concurrently. How do we implement concurrent table detach without
blocking other sessions?
Here queries are not using the date column to detach tables with
run_maintenance_proc(), which does not use CONCURRENTLY based on the
rete
On Fri, Jul 19, 2024 at 7:55 PM Christoph Berg wrote:
> Re: Durgamahesh Manne
> > with pg_partman By default proc() does not detach tables concurrently.
> How
> > do we implement tables detach concurrently without blocking other
> sessions
> > Here queries not using d
Hi
Respected Team
I know the use case of implementing partitions with publication and
subscription in built-in logical replication.
CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
(publish_via_partition_root = true); This will use the parent table to
replicate data changes to targ
Hi
Respected Team
I have tried to set up a pg_repack job with pg_cron but I could not
implement it properly.
Is there any way to schedule a pg_repack job with pg_cron?
If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not in a bastion host).
your
Hi
Respected Team
Is there any way to schedule a pg_repack job with pg_cron within the
instance?
If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not in a bastion host).
your response is highly valuable
Regards.
Durga Mahesh
Hi
Lock:extend (version 14.11)
How do we resolve the lock:extend issue when there is a surge in
concurrent sessions (insert and update) on the same table?
Reducing the number of concurrent sessions would solve it, but that is
not a real solution, and there is no network bandwidth issue.
Is there any parameter to tune to mini
Hi Respected Team,
Could you please let me know how these freeze parameters work.
An update query runs on the table, through which data is modified daily
in this case.
The total number of records in the table is about 20 lakhs (2 million).
current setting for this table is
Access method: heap
if it reaches > 0.1*200+1000 = 2,
use more frequent freezing, which can increase the overhead of
> autovacuum.
> *Recommendation = 0 -> 5000*
>
> Thanks, Semab
>
> On Sun, Aug 11, 2024 at 11:12 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Respected Team,
>>
>> Co
Hi
insert into
dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
*8vcpus and 32gb ram
Number of calls per sec 1600 at this time 42% of cpu utilized
Max in ms 33.62 per call
Avg in ms
Hi Greg
Great response from you this worked
Regards
Durga Mahesh
On Wed, Sep 11, 2024 at 7:12 PM Greg Sabino Mullane
wrote:
> On Wed, Sep 11, 2024 at 1:02 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>> createdat | timestamp with time
ke
>work_mem = '16MB'
>shared_buffers = '8GB'
>effective_cache_size = '24GB'
>
>
> On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>> insert into
>> dictionary(lang,
Hi Team
How do we improve Debezium performance? Any recommendations on the Kafka
configuration side?
The agenda is to minimize lag during moderate or high workload on the db.
default > poll.interval.ms = 500ms
Recommended value for balanced performance > 1000ms
Recommended value for high throughput > 1000
Hi Respected Team
Here source side tables are created with 7 days partition interval here we
have data within them
changelog_event_p20240830
changelog_event_p20240906
Target side tables are created with 3 days partition interval
Structure of tables at both side is same
Would it be possible to r
Hi pgdg team
How do we generate snapshot_name for the required slot on the latest
versions of PostgreSQL?
Below is the generated slot and snapshot info on postgres 10
osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput; slot_name
| consistent_point | snapshot_name | output_plugin
-+--
Hi,
How do we generate the snapshot for a logical slot so it can be used
with pg_dump's --snapshot option in pgsql 14?
Is there any function to check the snapshot of the slot after creating
it with pg_create_logical_replication_slot('pgdg', 'pgoutput')?
Regards,
Durga Mahesh
Hi Respected Team
I need to find foreign tables used in functions, sprocs and views.
How do I find all foreign tables being used by sprocs, views and functions?
Thanks & Regards
Durgamahesh Manne
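A heuristic sketch for the function side: grep routine bodies for the names of known foreign tables (relkind 'f' in pg_class). This text match can produce false positives; views are cleaner, since pg_depend records their table dependencies directly.

```sql
SELECT n.nspname  AS schema,
       p.proname  AS routine,
       ft.relname AS foreign_table
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
JOIN   pg_class ft    ON ft.relkind = 'f'   -- every foreign table
WHERE  p.prosrc ILIKE '%' || ft.relname || '%';
```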
the info i need
Thanks & Regards
Durgamahesh Manne
On Mon, 14 Oct, 2024, 23:29 Wong, Kam Fook (TR Technology), <
kamfook.w...@thomsonreuters.com> wrote:
> I am trying to copy a table (Postgres) that is close to 1 billion rows
> into a Partition table (Postgres) within the same DB. What is the fastest
> way to copy the data? This table has 37 co
On Fri, 11 Oct, 2024, 23:33 Durgamahesh Manne,
wrote:
>
>
> On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane
> wrote:
>
>> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>> composite key (placedon
On Tue, 15 Oct, 2024, 15:15 David Rowley, wrote:
> On Sat, 12 Oct 2024 at 02:28, Durgamahesh Manne
> wrote:
> > Second column of composite index not in use effectively with index scan
> when using second column at where clause
> >
> > I have composite index on (pl
On Fri, Oct 11, 2024 at 6:18 PM Greg Sabino Mullane
wrote:
> (please start a new thread in the future rather than replying to an
> existing one)
>
> You cannot query on b and use an index on (a,b) as you observed. However,
> you can have two indexes:
>
> index1(a)
> index2(b)
>
> Postgres will be
On Fri, Oct 11, 2024 at 5:00 PM Greg Sabino Mullane
wrote:
> if we have any column with large string/text values and we want it to be
>> indexed then there is no choice but to go for a hash index. Please correct
>> me if I'm wrong.
>>
>
> There are other strategies / solutions, but we would need
On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane
wrote:
> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> composite key (placedon,id)
>> In concurrent mode if i use id at where clause then query plan for that
>&g
Hi
DEBUG: Poll returned: 1
LOG: Command finished in worker 1: CREATE UNIQUE INDEX index_3199790649 ON
repack.table_5832724 USING btree (id)
DEBUG: swap
DEBUG: query failed: ERROR: canceling statement due to statement timeout
DETAIL: query was: LOCK TABLE offer.market IN ACCESS EXCLUSIVE
-- Forwarded message -
From: Durgamahesh Manne
Date: Mon, Oct 7, 2024 at 10:01 AM
Subject: Inefficient use of index scan on 2nd column of composite index
during concurrent activity
To:
Hi team
Second column of composite index not in use effectively with index scan
when using
Hi
Can we replicate 16 to 14 using builtin logical similarly pglogical?
Regards
Durga Mahesh
slot_name   | consistent_point | snapshot_name | output_plugin
-------------+------------------+---------------+---------------
lsr_sync_01 | 0/C000110        | 0003-0002-1   | pgoutput
Regards,
Durga Mahesh
On Fri, 20 Sept, 2024, 01:27 Durgamahesh Manne,
wrote:
> Hi Team
>
> --snapshot=*snapshotname*
to implement the same
Thanks for your valuable information
Regards,
Durga Mahesh
On Sat, 28 Sept, 2024, 23:10 Justin, wrote:
>
>
> On Sat, Sep 28, 2024 at 1:04 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Team
>>
>> Can anyone res
Hi Team
Can anyone respond to my question from respected team members ?
Durga Mahesh
On Thu, Sep 26, 2024 at 2:23 AM Durgamahesh Manne
wrote:
> Hi Team
>
> --snapshot=snapshotname
> (Use the specified synchronized snapshot when making a dump of the database
>
> This opt
s & Regards
>
>
> *Muhammad Affan (*아판*)*
>
> *PostgreSQL Technical Support Engineer** / Pakistan R&D*
>
> Interlace Plaza 4th floor Twinhub office 32 I8 Markaz, Islamabad, Pakistan
>
> On Sat, Jul 20, 2024 at 12:00 PM Durgamahesh Manne <
> maheshpostgr...@g
Hi Team
test=> CREATE TABLE public.bet (
betid int4 NOT NULL,
externalbetid text NULL,
externalsystem text NULL,
placedon timestamptz NULL,
createdon timestamptz NULL
) partition by list (placedon) ;
CREATE TABLE
test=> alter table public.bet add primary key
Hi Team
--snapshot=*snapshotname*
(Use the specified synchronized snapshot when making a dump of the database
This option is useful when needing to synchronize the dump with a logical
replication slot) as per the pgdg
How do we synchronize the dump with a logical replication slot
with --snapsho
Hi
I have 32vcpus and 128GB ram and 13 slots only created but need 18 more
logical slots needed in this case
mwp > 32
max replication slots > 40 allocated
logical rw > 18
wal senders > 55
mpw > 6
avmw > 3
here 18+6+3 = 27 + 13 used slots = 40
So here how many logical slots can we create upto bas
Hi
How many subscriptions can we create at maximum for an instance that has
32 vCPUs and 128GB RAM, for managing logical replication?
There is a high workload on 4 dbs and the remaining 30 dbs have a normal
workload.
Highwork load on 4dbs and rest of 30dbs normal workload
Regards
Durga Mahesh
-- Forwarded message -
From: Sri Mrudula Attili
Date: Wed, 15 Jan, 2025, 17:12
Subject: Re: Postgresql database terminates abruptly with too many open
files error
To: Tom Lane
Cc:
Hello Tom,
The max_connections =200 and max_files_per_process =1000 as you mentioned.
So should
On Tue, Jan 21, 2025 at 9:24 PM Adrian Klaver
wrote:
> On 1/21/25 04:08, Durgamahesh Manne wrote:
> > Hi Team,
> >
> > I have publication and subscription servers .So seems data replication
> > running with minimal lag but records count mismatch with more than 10
&
On Wed, 22 Jan, 2025, 03:11 Adrian Klaver,
wrote:
> On 1/21/25 11:40, Durgamahesh Manne wrote:
> >
> >
> > On Wed, 22 Jan, 2025, 00:22 Adrian Klaver, > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> >
> >
> > On 1/21/25 10:06
On Thu, Jan 23, 2025 at 11:24 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:
>
>
> On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver
> wrote:
>
>> On 1/22/25 18:53, Durgamahesh Manne wrote:
>> >
>> >
>> >
>>
>> >
On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver
wrote:
> On 1/22/25 18:53, Durgamahesh Manne wrote:
> >
> >
> >
>
> > > But records count varies with difference of more than 10 thousand
> >
> > Have you looked at the I/0 statistics between the Postg