On Thu, 26 Sept 2024 at 16:33, yudhi s wrote:
> Hello All,
>
> In an RDS Postgres instance we are seeing some select queries that sort
> 50 million rows (they have an ORDER BY clause); the significant portion
> of the wait events shows as "IO:BufFileWrite", and it
> runs for ~
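A minimal diagnostic sketch for this kind of report (table and column names hypothetical): "IO:BufFileWrite" generally means the sort does not fit in work_mem and spills to temp files, which EXPLAIN ANALYZE shows as an external merge sort.

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY created_at;
-- look for "Sort Method: external merge  Disk: ..." in the plan

-- Raising work_mem for just this session keeps more of the sort in
-- memory (the value is illustrative, not a recommendation):
SET work_mem = '256MB';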
On Tue, 17 Sept 2024 at 21:24, Adrian Klaver
wrote:
>
> Which means that on the Flink end you need to:
>
> 1) Use Flink async I/O .
>
> 2) Find a client that supports async or fake it by using multiple
> synchronous clients.
>
> On Postgres end there is this:
>
> https://www.postgresql.org/docs/current/wa
On Thu, 19 Sept, 2024, 8:40 pm Adrian Klaver,
wrote:
> On 9/19/24 05:24, Greg Sabino Mullane wrote:
> > On Thu, Sep 19, 2024 at 5:17 AM veem v
> > This is really difficult to diagnose from afar with only snippets of
> > logs and half-complete descriptions of your
On Thu, 19 Sept 2024 at 03:02, Adrian Klaver
wrote:
>
>
> This needs clarification.
>
> 1) To be clear when you refer to parent and child that is:
> FK
> parent_tbl.fld <--> child_tbl.fld_fk
>
> not parent and child tables in a partitioning scheme?
>
> 2) What are the table schemas
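A minimal sketch of the foreign-key relationship meant in 1), as opposed to a partitioning hierarchy (all names hypothetical):

CREATE TABLE parent_tbl (
    fld int PRIMARY KEY
);

CREATE TABLE child_tbl (
    id     int PRIMARY KEY,
    fld_fk int REFERENCES parent_tbl (fld)
);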
On Thu, 19 Sept 2024 at 17:54, Greg Sabino Mullane
wrote:
> On Thu, Sep 19, 2024 at 5:17 AM veem v wrote:
>
>> 2024-09-18 17:05:56 UTC:100.72.10.66(54582):USER1@TRANDB:[14537]:DETAIL:
>> Process 14537 waits for ShareLock on transaction 220975629; blocked by
>> process 1
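A diagnostic sketch for this kind of "waits for ShareLock on transaction" message: find who blocks whom (pg_blocking_pids() is available in 9.6+):

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;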
Hello,
It's Postgres version 16.1. We want to convert an existing column's data type
from integer to numeric, and it's taking a long time. The table is ~50GB,
has ~150 million rows, and is not partitioned.
We tried running a direct ALTER and it has been going for hours, so
On Thu, 9 Jan 2025 at 21:57, Ron Johnson wrote:
> On Thu, Jan 9, 2025 at 11:25 AM veem v wrote:
>
>> Hello,
>> It's postgres version 16.1, we want to convert an existing column data
>> type from integer to numeric and it's taking a long time. The size of the
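One commonly suggested pattern for this (a sketch only; table, column, and batch bounds are hypothetical): a direct ALTER ... TYPE from integer to numeric rewrites the whole table in one transaction, so instead add a new column, backfill in batches, and swap.

ALTER TABLE big_tbl ADD COLUMN amount_new numeric;

-- repeat per id range, committing between batches:
UPDATE big_tbl
SET amount_new = amount
WHERE id BETWEEN 1 AND 1000000;

-- once fully backfilled (and any default/NOT NULL handled):
ALTER TABLE big_tbl DROP COLUMN amount;
ALTER TABLE big_tbl RENAME COLUMN amount_new TO amount;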
Hi,
It's a Postgres database behind the scenes.
We have a use case in which the customer is planning to migrate data from
an older version (V1) to a newer version (V2). For V2, the tables will be
new, but their structure will be similar to the V1 version, though a few
changes in relationships might be ther
>
>
>
> This is what Sqitch(https://sqitch.org/) was designed for.
>
> The biggest issue is that the data will be incrementing while you do the
> structural changes. How you handle that is going to depend on the
> question raised by Peter J. Holzer:
> Is this being done in place on one Postgres in
Hi,
It's postgres 15+. And need guidance on JSON types, (for both on premise
vanilla postgres installation and AWS cloud based RDS/aurora postgres
installations).
I have never worked on a JSON data type in the past. But now in one of the
scenarios the team wants to use it and thus want to understa
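A starter sketch for this question (names hypothetical): jsonb is usually preferred over json because it is stored in a binary form, supports indexing, and de-duplicates keys.

CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

CREATE INDEX idx_events_payload ON events USING gin (payload);

-- containment query that can use the GIN index:
SELECT id FROM events WHERE payload @> '{"status": "active"}';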
On Tue, 15 Jul 2025 at 23:02, Merlin Moncure wrote:
> On Mon, Jul 14, 2025 at 2:01 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Mon, Jul 14, 2025 at 12:54 PM Adrian Klaver
>> wrote:
>>
>>> On 7/14/25 12:51, veem v wrote:
On Sun, 20 Jul 2025 at 02:29, Adrian Klaver
wrote:
> On 7/19/25 13:39, veem v wrote:
> >
>
> I thought you answered that with your tests above? At least for the
> Postgres end. As to the Snowflake end you will need to do comparable
> tests for fetching the data from Post
Hi,
I am trying to set the fetch size for my ResultSet to avoid an Out of Memory exception. I have created the Statement with ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY and ResultSet.HOLD_CURSORS_OVER_COMMIT, and I've also disabled auto commit as mentioned in the link Getting results based
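What the fetch size gives you at the protocol level is essentially a server-side cursor; a SQL-level sketch of the same idea follows (table name hypothetical). Note that the pgJDBC driver honors setFetchSize() only with auto commit disabled, which the poster has already done:

BEGIN;
DECLARE big_cur NO SCROLL CURSOR FOR
    SELECT * FROM big_table;
FETCH FORWARD 1000 FROM big_cur;  -- repeat until no rows are returned
CLOSE big_cur;
COMMIT;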
Hi,
When I tried to update the flush LSN position of the logical replication slot for my 11.3 database, using the command
select pg_replication_slot_advance(, )
I get the error:
user=cdcpsqlsrc,db=db_dsn_test03,app=PostgreSQL JDBC Driver,client=172.24.42.236 DEBUG: failed to increase restart lsn: prop
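For reference, a usage sketch of the function (slot name and LSN are placeholders): pg_replication_slot_advance(), added in PostgreSQL 11, only moves a slot forward, never backwards, and the target LSN must still be available on the server.

SELECT pg_replication_slot_advance('my_slot', '0/5000000'::pg_lsn);

-- current slot positions, to compare against the target LSN:
SELECT slot_name, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;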
Hi
For my work with Postgres 11.5, I needed unlogged tables to be dropped
automatically at commit time, but I found that the ON COMMIT
option is only supported for temporary tables.
I would like to understand why this option is limited to temporary
tables. Is there any
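A sketch of the asymmetry being asked about: ON COMMIT is part of CREATE TEMPORARY TABLE only, so the second statement below is rejected.

CREATE TEMPORARY TABLE t_tmp (id int) ON COMMIT DROP;      -- accepted

CREATE UNLOGGED TABLE t_unlogged (id int) ON COMMIT DROP;
-- ERROR:  ON COMMIT can only be used on temporary tables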
Hi,
We have an application that uses the PostgreSQL logical replication API to read the changes made to the PostgreSQL database and applies them to a different database (like Db2 etc.). We are using logical replication slots for this. Currently I am facing an issue where the replication slot is pointin
that didn't work. It still gives the WAL segment already removed error.
Could you please suggest a solution for this? Is there a way to set the
restart_lsn and flush_lsn of a slot? Or is recreating the slot the only possible
solution?
Thanks,
Rashmi
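If the WAL a slot still needs has already been removed, the slot cannot be rewound; recreating the slot (and re-seeding the target database) is the usual way out. A sketch, with slot and plugin names as placeholders:

SELECT pg_drop_replication_slot('my_slot');
SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');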
-Andres Freund wrote: -
To: Rash
feedback mechanism
right?
Thanks,
Rashmi
-"Rashmi V Bharadwaj" wrote: -
To: Andres Freund
From: "Rashmi V Bharadwaj"
Date: 13/03/2019 10:59AM
Cc: pgsql-gene...@postgresql.org
Subject: Re: PostgreSQL logical replication slot LSN values
Hi,
> Well, did you consum
Hi,
Is there a SQL query or a database parameter setting that I can use from an external application to determine if the PostgreSQL database is on cloud (like on Amazon RDS or IBM cloud) or on a non-cloud on-prem environment?
Thanks,
Rashmi
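There is no single official flag for this, but a heuristic sketch: managed services usually expose provider-specific settings or roles. On Amazon RDS, for example, rds.* GUCs and an rds_superuser role are typically present.

SELECT count(*) > 0 AS looks_like_rds
FROM pg_settings
WHERE name LIKE 'rds.%';

SELECT count(*) > 0 AS has_rds_superuser
FROM pg_roles
WHERE rolname = 'rds_superuser';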
Hi,
I have a table with a bytea column and am trying to load data using the COPY
command. But the COPY command is failing with "ERROR: invalid byte sequence
for encoding UTF8: 0x00". Why does PostgreSQL fail to load when the data
contains 0x00? How can this error be resolved? Is there any workaround to load the data
with
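A sketch of the distinction at play: UTF-8 text columns can never contain a NUL byte, so there is no workaround for text/varchar. For a bytea column the byte itself is fine, but in a text-format COPY file it has to arrive escaped (the data-file line in the comment is hypothetical):

-- data file line (id, tab, payload):  1	\\x001122
CREATE TABLE blob_demo (id int, payload bytea);
COPY blob_demo FROM '/tmp/blob_demo.dat';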
Hi,
I have a UTF8 database and a simple table with two columns (integer and
varchar). I created a csv file with some multibyte characters and am trying to
perform a load operation using the COPY command.
Database info:
Postgresql database details:
Name| Owner | Encoding | Collate
It's UTF-8. Also verified the load file, and it's UTF-8.
Regards,
Kiran
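One thing worth ruling out (a sketch; path and names hypothetical): state the file encoding explicitly instead of relying on the session's client_encoding.

COPY two_col_table (id, name)
FROM '/tmp/data.csv'
WITH (FORMAT csv, HEADER true, ENCODING 'UTF8');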
On Fri, Jan 12, 2024 at 10:48 PM Adrian Klaver
wrote:
> On 1/12/24 07:23, Kiran K V wrote:
> > Hi,
> >
> >
> > I have a UTF8 database and simple table with two columns (integer and
> > varch
Hi,
I am currently using PostgreSQL version 16 and the test_decoding plugin to
perform logical replication (using replication slots). I have a simple
table with an integer column and a JSON column. When a non-JSON column is
updated, the value "unchanged-toast-datum" is obtained for the JSON column.
Thi
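A commonly suggested mitigation (a sketch; table name hypothetical): with the default replica identity, an unchanged TOASTed value is not written to WAL again on UPDATE, so the decoder can only report "unchanged-toast-datum". Logging the full old row makes the value available, at the cost of larger WAL:

ALTER TABLE my_table REPLICA IDENTITY FULL;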
6-create-logical-replication-stream
>
> Regards,
> Kiran
>
> Thanks
>
> Dinesh
>
> --
> *From:* Kiran K V
> *Sent:* Friday, July 11, 2025 11:09 PM
> *To:* pgsql-gene...@postgresql.org
> *Subject:* Query regarding support of te
Hi,
I have a question regarding the new feature introduced in PostgreSQL 16
that enables logical replication from a standby server.
Currently, we are using a standalone instance of PostgreSQL 16 and
performing logical replication by creating replication slots and utilizing
the JDBC replication st
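For reference, a sketch of the PG16 capability being asked about: logical slots can now be created directly on a standby, provided wal_level = logical is set on the primary (and hot_standby_feedback = on is recommended on the standby):

SELECT * FROM pg_create_logical_replication_slot('standby_slot', 'test_decoding');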
Hi Team,
We are migrating from Oracle 12c to Aurora Postgres 13 and running into
implicit casting issues.
Oracle is able to implicitly cast the bind values of prepared statements
executed from the application to the appropriate type - String -> Number,
String -> Date, Number -> String etc. - when there
implication on the Postgres server that we need
to worry about?
Thanks,
Karthik K L V
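One approach people use for Oracle-style implicit binds (a sketch, and a double-edged one, since implicit casts can mask real type bugs; the pgJDBC-side alternative is the stringtype=unspecified connection parameter):

CREATE CAST (varchar AS numeric) WITH INOUT AS IMPLICIT;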
On Tue, Jul 19, 2022 at 12:12 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Monday, July 18, 2022, Karthik K L V
> wrote:
>
>> Hi Team,
>>
>> We are migrating f
Hi Team,
I am getting the below error while executing a SELECT query using Spring
Data JPA and the Hibernate framework on Aurora PostgreSQL.
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not
exist: text = bytea Hint: No operator matches the given name and argument
types. You
Aurora PostgreSQL v13.3
On Wed, Jul 20, 2022 at 3:02 PM Karthik K L V
wrote:
> Hi Team,
>
> I am getting the below error while executing a Select query using Spring
> DataJPA and Hibernate framework in Aurora Postgres SQL.
>
>
>
> *Caused by: org.postgresql.util.PSQLException: ER
Engine to read = null
as is null.
On Wed, Jul 20, 2022 at 5:29 PM hubert depesz lubaczewski
wrote:
> On Wed, Jul 20, 2022 at 03:02:13PM +0530, Karthik K L V wrote:
> > *Caused by: org.postgresql.util.PSQLException: ERROR: operator does not
> > exist: text = bytea Hint: No opera
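A null-safe rewrite sketch (table, column, and parameter names hypothetical): IS NOT DISTINCT FROM treats NULL = NULL as true, and the explicit cast stops the driver from having to guess a type for a bare "= null" bind:

SELECT *
FROM customers
WHERE name IS NOT DISTINCT FROM CAST(:name AS text);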
Hi Team,
We are migrating from Oracle 12c to Aurora Postgres 13 and running into
query failures when the bind value of a Text datatype resolves to null.
The same query works fine in Oracle without any issues. We use
Spring Data JPA and the Hibernate framework to connect and execute queries, and
the appl
which are
performed by me.
--
Regards,
Raghavendra Rao J S V
oast_2619_index"
CONTEXT: automatic vacuum of table "qovr.pg_toast.pg_toast_2619"
--
Regards,
Raghavendra Rao J S V
Good morning.
Please suggest the best-suited unit test framework for a PostgreSQL database,
and also share the related documents to understand the framework.
--
Regards,
Raghavendra Rao J S V
Hi,
How do I install pgTAP on a CentOS machine? I tried to install it but had no luck.
Please guide me on how to proceed further.
--
Regards,
Raghavendra Rao J S V
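Assuming the OS package is installed first (for CentOS, a pgtap package is available from the PGDG yum repository), pgTAP is then enabled per database; a minimal test sketch (table name hypothetical):

CREATE EXTENSION pgtap;

SELECT plan(1);
SELECT has_table('public', 'my_table', 'my_table should exist');
SELECT finish();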
eters?
- shared_buffers
- effective_cache_size
- work_mem
- maintenance_work_mem
- checkpoint_segments
- wal_keep_segments
- checkpoint_completion_target
- max_prepared_transactions = 0
--
Regards,
Raghavendra Rao J S V
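Illustrative starting points only; the right values depend on RAM and workload, and ALTER SYSTEM needs 9.4+, so on 9.2 these would go into postgresql.conf instead:

ALTER SYSTEM SET shared_buffers = '2GB';              -- often ~25% of RAM
ALTER SYSTEM SET effective_cache_size = '6GB';        -- ~50-75% of RAM
ALTER SYSTEM SET work_mem = '16MB';                   -- per sort/hash node
ALTER SYSTEM SET maintenance_work_mem = '512MB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;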
- Which one should have a higher value.
--
Regards,
Raghavendra Rao J S V
am planning to make the
'autovacuum_vacuum_scale_factor'
value zero and the autovacuum_vacuum_threshold value 150. Please
advise whether it has any negative impact.
--
Regards,
Raghavendra Rao J S V
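A smaller-blast-radius alternative sketch (table name hypothetical): override the settings per table instead of changing them globally.

ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold   = 150
);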
it" to zero or "
autovacuum_vacuum_scale_factor" to zero or both? Please clarify me.
Regards,
Raghavendra Rao
On Wed, Apr 11, 2018 at 12:59 PM, Laurenz Albe
wrote:
> Raghavendra Rao J S V wrote:
> > We are using postgres 9.2 version on Centos operating system. We
Thanks a lot.
On Wed 11 Apr, 2018, 9:07 AM Michael Paquier, wrote:
> On Tue, Apr 10, 2018 at 11:06:54PM +0530, Raghavendra Rao J S V wrote:
> > I am not clear on the difference between checkpoint_segments and
> > wal_keep_segments .
> >
> > I would like to know below th
constraint
"pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(18915, 6, f) already exists.
'pg_statistic' is a metadata table. Is it OK if I remove one duplicated
record from the 'pg_statistic' table?
--
Regards,
Raghavendra Rao J S V
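The commonly suggested recovery for this (a sketch; take a backup first): pg_statistic holds derived data that ANALYZE rebuilds, so the duplicate row can be deleted using the key from the error message and the statistics regenerated.

DELETE FROM pg_statistic
WHERE starelid = 18915 AND staattnum = 6 AND stainherit = false;

ANALYZE;  -- repopulates the statistics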
. Thanks in advance.
--
Regards,
Raghavendra Rao J S V
. Later we
are restarting the application server process.
How do we avoid accumulating dead tuples for those tables? Is there any
other approach to remove the dead tuples without VACUUM FULL / downtime?
Note: we are using Postgres version 9.2.
--
Regards,
Raghavendra Rao J S V
Mobile
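A sketch of the usual answer (table name hypothetical): a plain, non-FULL VACUUM reclaims dead tuples without an exclusive lock, so no downtime is needed; making autovacuum more aggressive on the hot tables avoids the manual runs.

VACUUM (VERBOSE) busy_table;

ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.01);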
Hi All,
We are using Postgres 9.2 on the CentOS operating system. We have
around 1300+ tables. We have the following autovacuum settings enabled.
Still, a few of the tables (84 tables) which are always busy are not
vacuumed. Dead tuples in those tables number more than 5000. Due to that
table
00
Kindly share your views. Does it cause any adverse effect on the DB?
Regards,
Raghavendra Rao
On 13 August 2018 at 18:05, Tomas Vondra
wrote:
>
>
> On 08/13/2018 11:07 AM, Raghavendra Rao J S V wrote:
>
>> Hi All,
>>
>> We are using postgres *9.2* versio
s Auto vacuum worker process will sleep like the Auto vacuum launcher
process?
What is the difference between the Auto vacuum launcher process and the
Auto vacuum worker process?
--
Regards,
Raghavendra Rao J S V
?
Regards,
Raghavendra Rao
On 17 August 2018 at 09:30, Joshua D. Drake wrote:
> On 08/16/2018 06:10 PM, Raghavendra Rao J S V wrote:
>
> Hi All,
>
> I have gone through several documents but I still have confusion
> related to "autovacuum_naptime" and "auto
lot of RAISE NOTICE messages, I would like to create a new log file
instead of appending the logs to the existing one. How can I achieve this?
For each execution of my function, I would like to get a new
log file. How do I do this?
--
Regards,
Raghavendra Rao J S V
Thanks a lot.
On Sun 19 Aug, 2018, 11:09 PM Adrian Klaver,
wrote:
> On 08/19/2018 10:22 AM, Raghavendra Rao J S V wrote:
> > Hi All,
> >
> > I have a log file as "
> > */opt/postgres/9.2/data/pg_log/postgresql-2018-08-19.csv*". Due to
> > "*log
that table in
postgresql database.
--
Regards,
Raghavendra Rao J S V
suggest me the steps / provide me a URL to implement?
--
Regards,
Raghavendra Rao J S V
"idx_tab_col6date" btree (col6date)
"idx_tab_rid" btree (rid)
"idx_tab_rtype_id" btree (rtypid)
"idx_tab_tkey" btree (tkey)
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
:
> On 08/25/2018 08:36 PM, Raghavendra Rao J S V wrote:
> > Hi All,
> >
> > One of our databases is 50GB in size. Out of it, one of the tables has
> > 149444622 records. The size of that table is 14GB and its indexes' size is
> 16GB.
> > Total size of the table and its in
budget.
>
> On Sun, Aug 26, 2018, 12:37 AM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Thank you very much for your prompt response.
>>
>> Please guide me below things.
>>
>> How to check whether rows got corrupted?
>>
>>
Hi All,
We are using the below command to take a backup of the database.
$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=1 --pgdata=- -D /opt/rao
While taking the backup we have received below error.
transaction log start point: 285/8F80
lp me.
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
?
--
Regards,
Raghavendra Rao J S V
Hi All,
Which is the most stable PostgreSQL version currently available for CentOS 7?
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
,
Raghavendra Rao J S V
Mobile- 8861161425
Thanks for the prompt response.
On Fri 28 Sep, 2018, 10:55 AM Michael Paquier, wrote:
> On Fri, Sep 28, 2018 at 10:33:30AM +0530, Raghavendra Rao J S V wrote:
> > Log file will be generated in *csv* format at *pg_log* directory in our
> > PostgreSQL. Every day we are getting o
chael Paquier wrote:
> On Fri, Sep 28, 2018 at 06:19:16AM -0700, Adrian Klaver wrote:
> > If log_truncate_on_rotation = 'on', correct?
>
> Yup, thanks for precising.
> --
> Michael
>
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
file:Success
Please help me to resolve the above error.
--
Regards,
Raghavendra Rao J S V
On Fri, 5 Oct 2018 at 07:06, Thomas Munro
wrote:
> On Fri, Oct 5, 2018 at 4:29 AM Raghavendra Rao J S V
> wrote:
> > PANIC: could not read from control file:Success
>
> That means that the pg_control file is the wrong size. What size is
> it? What filesystem is this, t
,,""
2018-10-08 05:27:44.517 UTC,,,27686,,5bbaead0.6c26,2,,2018-10-08 05:27:44
UTC,,0,LOG,0,"aborting startup due to startup process
failure",""
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
t_metadata
--
Regards,
Raghavendra Rao J S V
Hi All,
pg_dump is taking more time. Please let me know which configuration settings
we need to modify to speed up the pg_dump backup. We are using version 9.2 on a
CentOS box.
--
Regards,
Raghavendra Rao J S V
> Regards,
> Pavan
>
> On Thu, Oct 11, 2018, 8:02 AM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Hi All,
>>
>> pg_dump is taking more time. Please let me know which configuration
>> setting we need to modify to speedup the pg_dum
Thank you very much for your prompt response Christopher.
On Thu 11 Oct, 2018, 8:41 AM Christopher Browne, wrote:
> On Wed, Oct 10, 2018, 10:32 PM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Hi All,
>>
>> pg_dump is taking more time. Pleas
-
> *From:* Johnes Castro
> *Sent:* Wednesday, September 5, 2018 15:48
> *To:* Raghavendra Rao J S V; pgsql-general@lists.postgresql.org
> *Subject:* RE: Max number of WAL files in pg_xlog directory for Postgres
> 9.2 version
>
> Hi,
>
> This page in the d
pletion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
#wal_keep_segments = 0 # in logfile segments, 16MB each; 0 disables
wal_level = archive # minimal, archive, or hot_standby
archive_mode = off # allows archiving to be done
--
Regards,
Raghavendra Rao J S V
Thanks a lot.
On Mon, 15 Oct 2018 at 14:43, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> wrote:
> On Mon, 15 Oct 2018 09:46:47 +0200
> Laurenz Albe wrote:
>
> > Raghavendra Rao J S V wrote:
> > > Is there any impact if "#wal_keep_segments = 0 "
Hi All,
We are using the pg_dump backup utility in order to take backups of the
database. Unfortunately, it is taking around 24 hrs to back up a 28GB
database. Please guide me on how to reduce the time, and whether any
parameter needs to be modified that will help us reduce the
b
and alter the table column to modify the
datatype of the columns. Am I correct? Or is there any other way to resolve
it?
--
Regards,
Raghavendra Rao J S V
In my application, idle sessions are consuming CPU and RAM; see the
ps command output below.
How do idle sessions consume so much RAM/CPU?
How can this be controlled?
We are using PostgreSQL 9.2 on CentOS 6. Please guide me.
[image: image.png]
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
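A diagnostic sketch to go with the ps output: list idle sessions and how long they have been idle (these pg_stat_activity columns exist in 9.2):

SELECT pid, usename, state, now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY idle_for DESC;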
its
dependent tables using the pg_dump command.
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
Hi All,
We are using a PostgreSQL 9.2 database.
In one of the transactional tables, I have observed duplicate values in the
primary key columns.
Please guide me on how this is possible and how to fix this kind of issue.
--
Regards,
Raghavendra Rao J S V
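A sketch for investigating (table, column, and index names hypothetical): locate the duplicates first, and after fixing them, rebuild the index that should have prevented the problem.

SELECT pk_col, count(*)
FROM transactional_table
GROUP BY pk_col
HAVING count(*) > 1;

REINDEX INDEX transactional_table_pkey;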
through port 5433 (the PgBouncer port) we are receiving the
below error. Please guide me.
/opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x
--format=tar --gzip --compress=1 --pgdata=- -D /opt/rao
pg_basebackup: could not connect to server: ERROR: Unsupported startup
parameter
to PG. They are not supported through PGBouncer.
>
>
>
> *From:* Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
> *Sent:* Monday, April 8, 2019 9:21 AM
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* Getting error while running the pg_basebackup through PGB
dministrator command
pg_dump: could not open large object 59087743: FATAL: terminating
connection due to administrator command
FATAL: terminating connection due to administrator command
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
). Please guide me
on which configuration parameters need to be modified to reduce the time
taken by the pg_basebackup utility.
Is there any possibility of excluding the index data while taking the
pg_basebackup?
--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425
ement.
$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=6 --pgdata=- -D /opt/backup_db
On Fri, Jan 12, 2018 at 6:37 PM, Stephen Frost wrote:
> Greetings,
>
> * Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:
> > We have database with the siz
value?
Please let me know what this means.
Please don't top-post on the PG mailing lists.
How do I get clarifications on my query?
On Sat, Jan 13, 2018 at 9:52 PM, Stephen Frost wrote:
> Greetings,
>
> Please don't top-post on the PG mailing lists.
>
> * Raghavend
I am looking for help to minimise the time taken by the pg_basebackup
utility.
As informed earlier, we are taking the backup of the database using the
pg_basebackup utility with the below command.
$PGHOME/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar --gzip
--compress=6 --pgdata=- -D
HEN OTHERS THEN
    RAISE NOTICE 'Error occurred while executing
pop_new_deviceid_for_table for % table % %', p_table, SQLERRM,
SQLSTATE;
    PERFORM insert_log('ERROR', 'pop_new_deviceid_for_table', 'Error
occurred while executing pop_endpoints