On Tue, Jul 15, 2025 at 6:14 PM Laurenz Albe
wrote:
> On Tue, 2025-07-15 at 15:40 +0530, Durgamahesh Manne wrote:
> > We are facing issues with slow running query
> > SELECT betid, versionid, betdata, processed, messagetime, createdat,
> > updatedat
> > FROM praerm
Hi Team,
We are facing issues with a slow-running query:
SELECT betid, versionid, betdata, processed, messagetime, createdat, updatedat
FROM praermabetdata
WHERE processed = 'false'
ORDER BY betid, versionid
LIMIT 200 OFFSET 0
FOR UPDATE;
Q
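A partial index matching both the filter and the sort order may help here; a sketch, assuming the table and columns from the query above:

CREATE INDEX CONCURRENTLY idx_praermabetdata_unprocessed
ON praermabetdata (betid, versionid)
WHERE processed = 'false';
-- matches WHERE processed = 'false' ORDER BY betid, versionid,
-- so the LIMIT 200 can stop after reading 200 index entries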
On Fri, Jun 6, 2025 at 7:31 PM Ron Johnson wrote:
> On Fri, Jun 6, 2025 at 4:36 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Team
>>
>> Can we generate a fill factor for tables that have delete ops ?
>>
>> Does the fill factor r
On Fri, Jun 6, 2025 at 7:29 PM Ron Johnson wrote:
> On Fri, Jun 6, 2025 at 8:57 AM Laurenz Albe
> wrote:
>
>> On Fri, 2025-06-06 at 14:10 +0530, Durgamahesh Manne wrote:
>> > Can we generate a fill factor for tables that have delete ops ?
>> >
>> > Do
Hi Team
Can we set a fillfactor for tables that have delete ops?
Does fillfactor really work and help to minimize bloat for tables
that have delete ops?
I have a parent table with weekly partitions, so every week 50 to 60 GB of
bloat is generated, and autovacuum params are already in p
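For reference, fillfactor reserves free space in each heap page for subsequent updates (enabling HOT updates); space freed by deletes is reclaimed by vacuum regardless of fillfactor. A minimal sketch, with a hypothetical partition name:

ALTER TABLE bet_week_p2025w01 SET (fillfactor = 90);
-- affects only newly written pages; existing pages are rewritten
-- only by VACUUM FULL, CLUSTER, or pg_repack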
On Fri, 14 Mar, 2025, 09:11 Ron Johnson, wrote:
> On Thu, Mar 13, 2025 at 11:25 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> On Fri, Mar 14, 2025 at 8:19 AM Ron Johnson
>> wrote:
>>
>>> On Thu, Mar 13, 2025 at 10:16 PM Durgamahesh Ma
On Fri, Mar 14, 2025 at 8:19 AM Ron Johnson wrote:
> On Thu, Mar 13, 2025 at 10:16 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
> [snip]
>
>> Hi Adrian Klaver
>>
>> 1) Postgres version.
>> select version();
>>
On Fri, 14 Mar, 2025, 08:04 Rob Sargent, wrote:
>
>
>
> 3) Output of EXPLAIN ANALYZE of query.
>
> Result (cost=2.80..2.83 rows=1 width=1) (actual time=0.030..0.030 rows=1
> loops=1)
> InitPlan 1 (returns $0)
> -> Index Only Scan using idx_cachekeys on cachekeys
> (cost=0.55..2.80 row
On Fri, Mar 14, 2025 at 12:47 AM Adrian Klaver
wrote:
> On 3/13/25 12:12, Durgamahesh Manne wrote:
> > Hi Team
> >
> > This query takes more time than usual for execution
>
> Define usual.
>
>
> > How to optimize it in best possible way
>
> Can
Hi Team
This query takes more time than usual to execute.
How can we optimize it in the best possible way?
The columns used in this query are covered by a composite index, even though
it is not being used optimally.
SELECT EXISTS (SELECT Key FROM CACHEKEYS WHERE CacheType = $1 AND TrsId =
$2 AND BrandId = $3 AND SportId = $4 AND
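An index whose leading columns match the equality predicates lets this EXISTS probe stop at the first matching entry; a sketch based on the visible predicates (the query is truncated, so the real index may need more columns):

CREATE INDEX CONCURRENTLY idx_cachekeys_lookup
ON cachekeys (cachetype, trsid, brandid, sportid);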
On Thu, Jan 23, 2025 at 11:24 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:
>
>
> On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver
> wrote:
>
>> On 1/22/25 18:53, Durgamahesh Manne wrote:
>> >
>> >
>> >
>>
>> >
On Thu, Jan 23, 2025 at 10:08 PM Adrian Klaver
wrote:
> On 1/22/25 18:53, Durgamahesh Manne wrote:
> >
> >
> >
>
> > > But records count varies with difference of more than 10 thousand
> >
> > Have you looked at the I/0 statistics between the Postg
On Wed, 22 Jan, 2025, 03:11 Adrian Klaver,
wrote:
> On 1/21/25 11:40, Durgamahesh Manne wrote:
> >
> >
> > On Wed, 22 Jan, 2025, 00:22 Adrian Klaver, > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> >
> >
> > On 1/21/25 10:06
On Wed, 22 Jan, 2025, 00:22 Adrian Klaver,
wrote:
>
>
> On 1/21/25 10:06 AM, Durgamahesh Manne wrote:
>
> >
> > Hi Adrian Klaver
> >
> > 22,906,216 bytes/10,846 rows works out to 2112 bytes per row.
> >
> > Is that a reasonable per row estimate?
On Tue, Jan 21, 2025 at 11:26 PM Adrian Klaver
wrote:
> On 1/21/25 09:38, Durgamahesh Manne wrote:
> >
> >
>
> >
> > Hi Adrian Klaver
> >
> > Really Thanks for your quick response
> >
> > This happened during repack lag went to more than 3
On Tue, Jan 21, 2025 at 9:24 PM Adrian Klaver
wrote:
> On 1/21/25 04:08, Durgamahesh Manne wrote:
> > Hi Team,
> >
> > I have publication and subscription servers .So seems data replication
> > running with minimal lag but records count mismatch with more than 10
&
Hi Team,
I have publication and subscription servers. Data replication seems to be
running with minimal lag, but the record counts mismatch by more than ten
thousand records between the source and destination tables.
Could you please help in resolving this issue?
Regards,
Durga Mahesh
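A hedged starting point for diagnosing the mismatch above, using standard catalog views (compare per-table counts on both sides at a quiet moment):

-- on the publisher:
SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;
-- on the subscriber:
SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
-- then compare counts, e.g. (hypothetical table name):
SELECT count(*) FROM some_table;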
-- Forwarded message -
From: Sri Mrudula Attili
Date: Wed, 15 Jan, 2025, 17:12
Subject: Re: Postgresql database terminates abruptly with too many open
files error
To: Tom Lane
Cc:
Hello Tom,
The max_connections = 200 and max_files_per_process = 1000, as you mentioned.
So should
Hi
How many subscriptions can we create, at maximum, for an instance that has
32 vCPUs and 128 GB, for managing logical replication?
High workload on 4 DBs; the rest of the 30 DBs have a normal workload.
Regards
Durga Mahesh
Hi
I have 32 vCPUs and 128 GB RAM, and only 13 slots created, but 18 more
logical slots are needed in this case.
mwp > 32
max replication slots > 40 allocated
logical rw > 18
wal senders > 55
mpw > 6
avmw > 3
Here 18 + 6 + 3 = 27, plus 13 used slots = 40.
So here how many logical slots can we create, up to bas
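If the ceiling is the configured slot count, it can be raised; a sketch with illustrative values, assuming the abbreviations above refer to the usual replication GUCs (a server restart is required for all three):

ALTER SYSTEM SET max_replication_slots = 60;
ALTER SYSTEM SET max_wal_senders = 60;
ALTER SYSTEM SET max_worker_processes = 40;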
Hi
Can we replicate from Postgres 16 to 14 using built-in logical replication,
similarly to pglogical?
Regards
Durga Mahesh
Hi
DEBUG: Poll returned: 1
LOG: Command finished in worker 1: CREATE UNIQUE INDEX index_3199790649 ON
repack.table_5832724 USING btree (id)
DEBUG: swap
DEBUG: query failed: ERROR: canceling statement due to statement timeout
DETAIL: query was: LOCK TABLE offer.market IN ACCESS EXCLUSIVE
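The error above means a server-side statement_timeout cancelled pg_repack's LOCK TABLE during the swap phase, which needs a brief ACCESS EXCLUSIVE lock. A hedged workaround is to disable the timeout for the role pg_repack connects as (hypothetical role name):

ALTER ROLE repack_user SET statement_timeout = 0;
-- pg_repack also accepts --wait-timeout to control how long it waits for locks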
On Tue, 15 Oct, 2024, 15:15 David Rowley, wrote:
> On Sat, 12 Oct 2024 at 02:28, Durgamahesh Manne
> wrote:
> > Second column of composite index not in use effectively with index scan
> when using second column at where clause
> >
> > I have composite index on (pl
On Fri, 11 Oct, 2024, 23:33 Durgamahesh Manne,
wrote:
>
>
> On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane
> wrote:
>
>> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>> composite key (placedon
On Mon, 14 Oct, 2024, 23:29 Wong, Kam Fook (TR Technology), <
kamfook.w...@thomsonreuters.com> wrote:
> I am trying to copy a table (Postgres) that is close to 1 billion rows
> into a Partition table (Postgres) within the same DB. What is the fastest
> way to copy the data? This table has 37 co
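One common approach (a sketch under assumed names, not necessarily the thread's answer) is a plain INSERT ... SELECT split into key ranges, so each transaction stays a manageable size:

INSERT INTO orders_partitioned
SELECT * FROM orders
WHERE order_id >= 0 AND order_id < 100000000;
-- repeat for each subsequent range; an index on order_id keeps each pass cheap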
On Fri, Oct 11, 2024 at 9:57 PM Greg Sabino Mullane
wrote:
> On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> composite key (placedon,id)
>> In concurrent mode if i use id at where clause then query plan for that
>&g
On Fri, Oct 11, 2024 at 6:18 PM Greg Sabino Mullane
wrote:
> (please start a new thread in the future rather than replying to an
> existing one)
>
> You cannot query on b and use an index on (a,b) as you observed. However,
> you can have two indexes:
>
> index1(a)
> index2(b)
>
> Postgres will be
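Following that advice, a sketch with hypothetical table and index names:

CREATE INDEX idx_bet_placedon ON bet (placedon);
CREATE INDEX idx_bet_id ON bet (id);
-- the planner can use either alone, or combine both via a BitmapAnd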
-- Forwarded message -
From: Durgamahesh Manne
Date: Mon, Oct 7, 2024 at 10:01 AM
Subject: Inefficient use of index scan on 2nd column of composite index
during concurrent activity
To:
Hi team
The second column of a composite index is not used effectively by an index scan
when using
On Fri, Oct 11, 2024 at 5:00 PM Greg Sabino Mullane
wrote:
> if we have any column with large string/text values and we want it to be
>> indexed then there is no choice but to go for a hash index. Please correct
>> me if I'm wrong.
>>
>
> There are other strategies / solutions, but we would need
to implement the same
Thanks for your valuable information
Regards,
Durga Mahesh
On Sat, 28 Sept, 2024, 23:10 Justin, wrote:
>
>
> On Sat, Sep 28, 2024 at 1:04 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Team
>>
>> Can anyone res
s & Regards
>
>
> *Muhammad Affan (*아판*)*
>
> *PostgreSQL Technical Support Engineer** / Pakistan R&D*
>
> Interlace Plaza 4th floor Twinhub office 32 I8 Markaz, Islamabad, Pakistan
>
> On Sat, Jul 20, 2024 at 12:00 PM Durgamahesh Manne <
> maheshpostgr...@g
Hi Team
Can anyone from the respected team members respond to my question?
Durga Mahesh
On Thu, Sep 26, 2024 at 2:23 AM Durgamahesh Manne
wrote:
> Hi Team
>
> --snapshot=snapshotname
> (Use the specified synchronized snapshot when making a dump of the database
>
> This opt
Hi Team
test=> CREATE TABLE public.bet (
betid int4 NOT NULL,
externalbetid text NULL,
externalsystem text NULL,
placedon timestamptz NULL,
createdon timestamptz NULL
) partition by list (placedon) ;
CREATE TABLE
test=> alter table public.bet add primary key
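The command above is truncated; note that on a partitioned table the primary key must include all partition key columns, so a sketch of a version that would be accepted:

ALTER TABLE public.bet ADD PRIMARY KEY (betid, placedon);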
 slot_name   | consistent_point | snapshot_name | output_plugin
-------------+------------------+---------------+---------------
 lsr_sync_01 | 0/C000110        | 0003-0002-1   | pgoutput
Regards,
Durga Mahesh
On Fri, 20 Sept, 2024, 01:27 Durgamahesh Manne,
wrote:
> Hi Team
>
> --snapshot=*snapshotname*
Hi Team
--snapshot=*snapshotname*
(Use the specified synchronized snapshot when making a dump of the database.
This option is useful when needing to synchronize the dump with a logical
replication slot), as per the pgdg docs.
How do we synchronize the dump with a logical replication slot
with --snapsho
Hi,
How do we generate the snapshot for a logical slot which can be used in the
pg_dump command options with --snapshot in pgsql 14?
Is there any function to check the snapshot of the slot after creating the slot
with pg_create_logical_replication_slot('pgdg', 'pgoutput')?
Regards,
Durga Mahesh
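For reference, the SQL function pg_create_logical_replication_slot() does not export a snapshot name; that comes from creating the slot over a replication connection, and the snapshot stays usable only while that session is kept open. A sketch (database and slot names are placeholders):

-- connect with: psql "dbname=mydb replication=database"
CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
-- note the snapshot_name column in the result, keep this session open,
-- then run: pg_dump --snapshot=<snapshot_name> mydb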
Hi pgdg team
How do we generate the snapshot_name for a required slot on the latest
versions of PostgreSQL?
Below is the generated slot and snapshot info on Postgres 10:
osdb_lsr=# CREATE_REPLICATION_SLOT lsr_sync_01 LOGICAL pgoutput;
 slot_name   | consistent_point | snapshot_name | output_plugin
-------------+--
Hi Respected Team
Here the source-side tables are created with a 7-day partition interval, and
we have data within them:
changelog_event_p20240830
changelog_event_p20240906
The target-side tables are created with a 3-day partition interval.
The structure of the tables on both sides is the same.
Would it be possible to r
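When the partition layouts differ between the two sides, publishing via the partition root makes changes replicate as if made to the parent table, so the subscriber routes them into its own partitions. A sketch (publication name is hypothetical):

CREATE PUBLICATION pub_changelog FOR TABLE changelog_event
WITH (publish_via_partition_root = true);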
Hi Team
How do we improve Debezium performance?
Any recommendations on the Kafka configuration side?
The agenda is to minimize lag during moderate or high workload on the DB.
default > poll.interval.ms = 500ms
Recommended value for balanced performance > 1000ms
Recommended value for high throughput > 1000
ke
>work_mem = '16MB'
>shared_buffers = '8GB'
>effective_cache_size = '24GB'
>
>
> On Wed, 11 Sept 2024 at 13:50, Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>> insert into
>> dictionary(lang,
Hi Greg
Great response from you; this worked.
Regards
Durga Mahesh
On Wed, Sep 11, 2024 at 7:12 PM Greg Sabino Mullane
wrote:
> On Wed, Sep 11, 2024 at 1:02 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi
>> createdat | timestamp with time
Hi
insert into
dictionary(lang,tid,sportid,brandid,translatedtext,objecttype,basetid)
values ($1,$2,$3,$4,$5,$6,$7) on conflict do nothing
8 vCPUs and 32 GB RAM
Number of calls per sec: 1600; at this time 42% of CPU is utilized
Max in ms: 33.62 per call
Avg in ms
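At 1600 single-row calls per second, batching several rows into one statement may cut per-call overhead; a sketch of the same insert with two rows per call (parameter numbering is illustrative):

INSERT INTO dictionary (lang, tid, sportid, brandid, translatedtext, objecttype, basetid)
VALUES ($1,$2,$3,$4,$5,$6,$7),
       ($8,$9,$10,$11,$12,$13,$14)
ON CONFLICT DO NOTHING;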
use more frequent freezing, which can increase the overhead of
> autovacuum.
> *Recommendation = 0 -> 5000*
>
> Thanks, Semab
>
> On Sun, Aug 11, 2024 at 11:12 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi Respected Team,
>>
>> Co
Hi Respected Team,
Could you please let me know how these freeze parameters work?
An update query runs on the table, through which data is modified daily in
this case.
The total record count in the table is about 20 lakh (2 million).
The current setting for this table is:
Access method: heap
if it reaches > 0.1*200+1000 = 2,
Hi
Lock:extend (version 14.11)
How do we resolve the lock:extend issue when there is a surge in concurrent
sessions (insert and update) on the same table?
Reducing concurrent sessions would solve it, but that is not a solution, and
there is no network bandwidth issue.
Is there any parameter to tune to mini
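To confirm how many sessions are waiting on relation extension at a given moment, a quick check against pg_stat_activity:

SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock' AND wait_event = 'extend';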
Hi
Respected Team
Is there any way to schedule a pg_repack job with pg_cron within the
instance?
If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not in a bastion host).
Your response is highly valuable.
Regards,
Durga Mahesh
Hi
Respected Team
I have tried to set up a pg_repack job with pg_cron, but I could not
implement it properly.
Is there any way to schedule a pg_repack job with pg_cron?
If yes, then please let me know the best approach to schedule it with
pg_cron within the instance (not in a bastion host).
your
Hi
Respected Team
I know the use case of implementing partitions with publication and
subscription in built-in logical replication.
CREATE PUBLICATION dbz_publication FOR TABLE betplacement.bet WITH
(publish_via_partition_root = true); This will use the parent table to
replicate data changes to targ
On Fri, Jul 19, 2024 at 7:55 PM Christoph Berg wrote:
> Re: Durgamahesh Manne
> > with pg_partman By default proc() does not detach tables concurrently.
> How
> > do we implement tables detach concurrently without blocking other
> sessions
> > Here queries not using d
Hi Respected Team
With pg_partman, by default run_maintenance_proc() does not detach tables
concurrently. How do we implement table detach concurrently, without
blocking other sessions? Here the queries are not using the date column to
detach tables with run_maintenance_proc(), which does not use CONCURRENTLY,
based on the rete
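For comparison, since Postgres 14 plain SQL can detach a partition without blocking concurrent queries; a sketch with hypothetical names (it cannot run inside a transaction block):

ALTER TABLE changelog_event DETACH PARTITION changelog_event_p20240830 CONCURRENTLY;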
Hi David,
Excellent response from you. Great!
Regards,
Durga Mahesh
On Thu, Jul 18, 2024 at 11:28 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wednesday, July 17, 2024, Durgamahesh Manne
> wrote:
>
>>
>> Could you please provide more clarity on t
Hi
Do new inserts block while performing vacuum freeze operations?
When autovacuum runs, it will freeze the transaction IDs (TXIDs) of the
table it's working on. This means that any transactions that started before
autovacuum began will be allowed to complete, but new transactions will be
blocked
Hi Respected Team
By default, run_maintenance_proc() does not detach tables concurrently. How
do we implement table detach concurrently without blocking running sessions
in prod? Why is this very critical to implement for pg_partman?
If this is not available yet in 5.1.0, then when can I expect to get it?
if alread
Hi
Postgres supports only up to microseconds (6 decimal digits of precision).
How do we generate timestamps with nanoseconds, as RDS Postgres does not
support the timestamp9 extension?
Is there a way to generate timestamps with nanosecond precision on
pg_partman with epoch, without typecasting or with typecasting?
the info i need
Thanks & Regards
Durgamahesh Manne
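One hedged workaround for the nanosecond question above, since timestamptz stops at microseconds: store epoch nanoseconds in a bigint and convert only for display (names are illustrative):

CREATE TABLE events (
    event_time_ns bigint NOT NULL  -- epoch nanoseconds supplied by the application
);
-- convert for display; to_timestamp() still truncates to microseconds:
SELECT to_timestamp(event_time_ns / 1000000000.0) FROM events;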
Hi Respected Team
I need to find the foreign tables used in functions, sprocs, and views.
How do I find all foreign tables being used by sprocs, views, and
functions?
Durgamahesh Manne
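There is no direct catalog dependency from a function body to the tables it touches, so a text match against the source is a practical approximation; a sketch using standard catalogs:

-- foreign tables mentioned in view definitions:
SELECT v.schemaname, v.viewname, c.relname AS foreign_table
FROM pg_views v
JOIN pg_class c ON v.definition ILIKE '%' || c.relname || '%'
WHERE c.relkind = 'f';

-- functions and procedures whose source mentions a foreign table:
SELECT p.proname, c.relname AS foreign_table
FROM pg_proc p
JOIN pg_class c ON p.prosrc ILIKE '%' || c.relname || '%'
WHERE c.relkind = 'f';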
On Sat, May 23, 2020 at 6:50 PM Andreas Kretschmer
wrote:
>
>
> Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:
> > Hi
> >
> > Respected to PGDG GLOBAL TEAM
> >
> > I am getting this error( ERROR: data type character varying has no
> > default op
json","authorization":"Basic
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
text/json, text/x-json, text/javascript, application/xml, text/xml"
NOTE: I have created a pg_trgm-based GIN index on the vch_message column of
the slp01 table, but it occupied more disk space, hence I deleted the
trgm-based GIN index. Please help in creating a GIN index on the vch_message
column of the slp01 table.
Regards
Durgamahesh Manne
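A sketch of the index being asked about; a pg_trgm GIN index is inherently large on a wide text column, so the disk cost is expected:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX CONCURRENTLY idx_slp01_vch_message_trgm
ON slp01 USING gin (vch_message gin_trgm_ops);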
#!/bin/bash
# simple shmsetup script
page_size=`getconf PAGE_SIZE`
phys_pages=`getconf _PHYS_PAGES`
shmall=`expr $phys_pages / 2`
shmmax=`expr $shmall \* $page_size`
echo kernel.shmmax = $shmmax
echo kernel.shmall = $shmall
Regards
Durgamahesh Manne
On Fri, Jan 17, 2020 at 7:43 PM Stephen Frost wrote:
> Greetings,
>
> * Durgamahesh Manne (maheshpostgr...@gmail.com) wrote:
> > Please let me know that automatic table partitioning is possible in pgsql
> > 12 or not without using trigger function
>
> The approach I
Hi
To the respected PostgreSQL international team
Please let me know whether automatic table partitioning without using a
trigger function is possible in pgsql 12 or not.
Regards
Durgamahesh Manne
Hi
To the respected PostgreSQL international team
Please let me know whether automatic table partitioning is possible in pgsql
12 or not without using a trigger function.
Regards
Durgamahesh Manne
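For reference, pgsql 12 supports declarative partitioning with no trigger function, though partitions themselves must still be created ahead of time (pg_partman can automate that). A minimal sketch:

CREATE TABLE measurement (
    logdate date NOT NULL,
    value numeric
) PARTITION BY RANGE (logdate);

CREATE TABLE measurement_2020q1 PARTITION OF measurement
    FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');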
On Wed, Oct 16, 2019 at 3:22 PM Durgamahesh Manne
wrote:
>
>
> On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari wrote:
>
>> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>> wrote:
>> > Is there any way to reduce dump time when i take dump of the table
&
On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari wrote:
> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
> wrote:
> > Is there any way to reduce dump time when i take dump of the table
> which has 148gb in size without creating partition* on that table has 148gb
> in siz
On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari wrote:
> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
> wrote:
> > Logical dump of that table is taking more than 7 hours to be completed
> >
> > I need to reduce to dump time of that table that has 88GB in size
>
is taking more than 7 hours to be completed.
I need to reduce the dump time of that table, which is 88 GB in size.
Regards
Durgamahesh Manne
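Since pg_dump -j parallelizes across tables rather than within one, a hedged alternative for a single large table is ranged COPY from several psql sessions (assumes an indexed id column; names are illustrative):

-- psql session 1:
\copy (SELECT * FROM big_table WHERE id < 50000000) TO 'part1.csv' (FORMAT csv)
-- psql session 2:
\copy (SELECT * FROM big_table WHERE id >= 50000000) TO 'part2.csv' (FORMAT csv)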
I configured tail_n_mail on the server.
How do I get DB object errors for a specific user (ravi) from the Postgres
log? Suppose the username is ravi:
INCLUDE: ravi ERROR:
Is this the correct approach to get errors related to a specific DB user
(ravi) from the PG log?
Regards
Durgamahesh Manne
*From:* Durgamahesh Manne
*Sent:* Thursday, April 4, 2019 12:07 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* dbuser access privileges
hi
Respected international pgsql team
pershing=# grant INSERT on public.hyd to ravi;
GRANT
I have granted INSERT command access to a non
On Thu, Apr 4, 2019 at 4:14 PM Durgamahesh Manne
wrote:
>
>
>
> On Thu, Apr 4, 2019 at 3:55 PM Ron wrote:
>
>> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
>> > hi
>> > Respected international pgsql team
>> >
>> > pershing=# grant INS
On Thu, Apr 4, 2019 at 3:55 PM Ron wrote:
> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
> > hi
> > Respected international pgsql team
> >
> > pershing=# grant INSERT on public.hyd to ravi;
> > GRANT
> > i have granted insert command access to non superus
hi
Respected international pgsql team
pershing=# grant INSERT on public.hyd to ravi;
GRANT
I have granted INSERT command access to a non-superuser (ravi).
pershing=> insert into hyd (id,name) values('2','delhi');
INSERT 0 1
Here the data was inserted.
pershing=# grant UPDATE on public.hyd to ravi;
GRANT
i h
Hi
Respected postgres team
Please let me know an open-source application interface to monitor the
pgaudit log files only, as I have installed the pgaudit tool.
Regards
Durgamahesh Manne
On Saturday, March 30, 2019, David Steele wrote:
> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility
On Fri, Mar 29, 2019 at 8:58 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
> On 29/3/19 5:15 μ.μ., Durgamahesh Manne wrote:
>
> Hi
> Respected pgsql team
>
> please let me know the pgaudit parameter to store pgaudit log files only
>
> i don't
Hi
Respected pgsql team
Please let me know the pgaudit parameter to store pgaudit log files
separately. I don't want to store pgaudit log files at the pgsql
log_directory file location.
Regards
durgamahesh manne
On Mon, Jan 28, 2019 at 8:41 PM Adrian Klaver
wrote:
> On 1/28/19 5:04 AM, Ron wrote:
> > On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> >> Hi
> >>
> >> below query is being executed for long time
> >>
> >> Select
> >> distinc
On Mon, Jan 28, 2019 at 6:34 PM Ron wrote:
> On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > below query is being executed for long time
> >
> > Select
> > distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> > ltrim(rtrim(TFA.client_
Number"))=ltrim(rtrim(TFA.client_account_key))
where AC."iInsightAccountID" is null;
The query has been executing for a long time even after I created the
required indexes on the columns of the tables.
Please help to achieve faster query execution.
Regards
durgamahesh manne
On Wed, Nov 28, 2018 at 6:31 PM Durgamahesh Manne
wrote:
>
>
> On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule
> wrote:
>
>> Hi
>>
>> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
>> maheshpostgr...@gmail.com> napsal:
>>
>&g
On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule
wrote:
> Hi
>
> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
> maheshpostgr...@gmail.com> napsal:
>
>> Hi
>>
>> Respected community members
>>
>> I have configured tds_fdw on postgres
insert the data into a foreign table
1) Is there any way to run insert, delete, and update queries on foreign tables?
Regards
durgamahesh manne
On Mon, Oct 15, 2018 at 9:52 PM Adrian Klaver
wrote:
> On 10/15/18 8:56 AM, Durgamahesh Manne wrote:
>
>
> > I request you all community members to provide built in bdr v3 version
> > replication for public as multimaster replication is on high priority
> > again
On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> So i need unlimited length data type for required column of the table for
>> sto
On Mon, Oct 15, 2018 at 7:54 PM Tom Lane wrote:
> Durgamahesh Manne writes:
> >>> If character varying is used without length specifier, the type
> >>> accepts strings of any size
> >>> but varchar does not accept more than this 10485760 value
>
> Yo
On Mon, Oct 15, 2018 at 3:11 PM Thomas Kellerer wrote:
> Durgamahesh Manne schrieb am 15.10.2018 um 11:18:
> > was there any specific reason that you have given max length for varchar
> is limited to 10485760 value?
> >
> > why you have not given max length for varcha
On Mon, Oct 15, 2018 at 2:42 PM Durgamahesh Manne
wrote:
>
>
> On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> w
On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne
wrote:
>
>
> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
>> wrote:
>>
>&g
On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne
wrote:
>
>
> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
> wrote:
>
>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>> > Hi
>> >
>> > please let me know the max length of varchar & text
On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver
wrote:
> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar & text in postgres
>
> https://www.postgresql.org/docs/10/static/datatype-character.html
> >
&g
Hi
Please let me know the max length of varchar & text in Postgres.
Regards
Durgamahesh Manne
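A short illustration of the limits discussed in the replies above: the largest accepted varchar length modifier, and text with no declared limit (individual values are still bounded by the roughly 1 GB field limit):

CREATE TABLE t (
    a varchar(10485760),  -- largest length modifier accepted
    b text                -- no declared limit
);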
ould be the
> > ones to ask about configuring.
>
> i said it already, BDR3 is not for public, only for our customers. You
> will need a own support contract.
>
>
> Durgamahesh Manne, please contact us, if you are interesst in BDR version
> 3.
>
>
> Regards, And
On Mon, Oct 1, 2018 at 7:34 PM Adrian Klaver
wrote:
> On 10/1/18 1:08 AM, Durgamahesh Manne wrote:
> >
> >
> > On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 9/28/18 8:
On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
wrote:
> On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > This is regarding bdr extension issue. I got below error at the time i
> > have tried to create the bdr extention
> >
> >
> > ERROR: c
Hi
This is regarding a bdr extension issue. I got the below error when I tried
to create the bdr extension:
ERROR: could not open extension control file
"opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file or
directory
Regards
Durgamahesh Manne
Thank you all very much for this information
On Sat, Sep 22, 2018 at 12:38 AM Alban Hertroys wrote:
>
>
> > On 21 Sep 2018, at 17:49, Durgamahesh Manne
> wrote:
> >
> >
>
> Considering how hard you try to get rid of duplicates, I'm quite convinced
&
On Fri, Sep 21, 2018 at 9:12 PM Andreas Kretschmer
wrote:
>
>
> Am 21.09.2018 um 17:13 schrieb Durgamahesh Manne:
> > query is below
>
> query and plan still not readable. Store it into a textfile and attach
> it here.
>
>
> Andreas
>
> --
> 2ndQuadra
On Fri, Sep 21, 2018 at 7:38 PM Durgamahesh Manne
wrote:
> Hi
>
> Complex query taken around 30 minutes to execute even i have
> increased work_mem value to 4GB temporarily as total ram is 16gb
>
> Explain analyze query taken around 30 minutes to execute even i have
> cr
Sort Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
Sort Method: external merge  Disk: 42758304kB
->  Nested Loop  (cost=0.42..266305.78 rows=59959206 width=677) (actual time=0.122..73786.837 rows=61595746 loops=1)
      ->  Seq Scan on "table3" j  (cost=0.00..669.12 rows=25132 width=591) (actual time=0.021..28.338 rows=25132 loops=1)
            Filter: (NOT "bFetch")
      ->  Index Scan using cpr_idx4 on table2 k  (cost=0.42..6.92 rows=365 width=107) (actual time=0.838..2.244 rows=2451 loops=25132)
            Index Cond: (("vchAgentTaxID")::text = (j.vchagenttaxid)::text)
Planning time: 2.369 ms
Execution time: 1807771.091 ms
So I need this query to execute in less time. Please help in optimizing the
complex query's execution time.
Regards
Durgamahesh Manne