Re: txid_current vs xact_commit stats

2024-10-09 Thread Eddie Mishler



On 10/8/2024 23:39, Laurenz Albe wrote:

On Wed, 2024-10-09 at 04:22 +, senor wrote:

I was under the impression that all transactions must end with a commit or a
rollback but watching stats doesn't support this. Web searches tend to return
info on what a transaction is or the ratio of commits to rollbacks. I found
nothing contradicting what I think I know.

The rollback can be implicit, for example when you terminate the connection or
crash the server...

Also, PostgreSQL has autocommit, so every data modifying statement that's not
in an explicit transaction will implicitly commit at the end of the statement.
This part I'm aware of but I assume whether implicit or explicit it 
would show up in the stats I'm monitoring.

I've sampled pg_stat_database.xact_commit, pg_stat_database.xact_rollback and
txid_current() at intervals on a few independent clusters and see that commits
increase anywhere from 50% to 300% of the rate of transaction increase. Rollback
remains very near zero for all clusters. Each cluster tends to stay consistently
within a range (i.e. 120-130% or 50-70%).
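
For reference, all three numbers can be sampled in one query, along these lines (a sketch only; the interval scheduling and diffing of successive samples are left out, and note that txid_current() itself consumes an XID, slightly inflating the count):

```sql
-- Sketch: one sampling point; diff successive samples to get rates.
SELECT now()              AS sampled_at,
       sum(xact_commit)   AS commits,
       sum(xact_rollback) AS rollbacks,
       txid_current()     AS current_xid
FROM pg_stat_database;
```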

Perhaps what I wrote above explains that.
How is it that I'm seeing more commits than transactions? I thought I 
should see something as simple as commits + rollbacks = transactions. 
In some cases I'm seeing 3 times as many commits. I'm not aware that 
multiple commits per transaction, or commits without a transaction, are 
possible. There doesn't seem to be a problem other than with my 
understanding.

PG version 11 & 12 on Linux

That's too old.

100% agree.

Yours,
Laurenz Albe


Thank you Laurenz for the response. Your replies to my previous posts 
helped identify an ongoing issue with the stats collection in some 
clusters. I'm looking forward to an upgrade to PG v16 to address that. 
This current question came up while looking into that, but I don't 
think it's related.


Thanks,

Senor





Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Sergey Fukanchik
Another possibility is orphaned files.
See 
https://www.dbi-services.com/blog/can-there-be-orphaned-data-files-in-postgresql/
and https://github.com/bdrouvot/pg_orphaned
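
The same idea can be sketched in plain SQL without the extension (heavily hedged: run it while connected to the suspect database, and treat any hits only as candidates -- pg_relation_filenode() is needed because mapped catalogs don't record their filenode in pg_class, and files belonging to in-flight transactions can look orphaned):

```sql
-- Sketch: numeric files in this database's directory that no relation
-- claims (pg_ls_dir() requires superuser or equivalent rights).
SELECT f.name
FROM pg_ls_dir('base/' || (SELECT oid FROM pg_database
                           WHERE datname = current_database())) AS f(name)
WHERE f.name ~ '^[0-9]+$'
  AND f.name::oid NOT IN (SELECT pg_relation_filenode(oid)
                          FROM pg_class
                          WHERE pg_relation_filenode(oid) IS NOT NULL);
```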
--
Sergey

On Wed, 9 Oct 2024 at 19:02, Greg Sabino Mullane  wrote:
>
> On Wed, Oct 9, 2024 at 4:10 AM Mikael Petterson  
> wrote:
>>
>> Hi,
>>
>> I find our disk is filling up.
>>
>> sudo find /var/lib -type f -size +100M -exec ls -lh {} \; | awk '{ print $9 
>> ": " $5 }'
>
> ...
>
> Those files only add up to about 30GB. That's pretty small these days : time 
> for a bigger disk? Or perhaps the space is elsewhere: probably want to do 
> something like
> sudo find / -xdev -maxdepth 2 -exec du -sh {} \; | grep -E 'G|M' | sort -g
>
> Cheers,
> Greg
>






Re: Questions about document "Concurrency control" section

2024-10-09 Thread Laurenz Albe
On Wed, 2024-10-09 at 18:07 +0800, iseki zero wrote:
> I found it difficult to understand some paragraphs in the document's 
> "Concurrency control" section.
> 
>  > The Repeatable Read mode provides a rigorous guarantee that each 
> transaction sees a completely stable view of the database. However, this 
> view will not necessarily always be consistent with some serial (one at 
> a time) execution of concurrent transactions of the same level. For 
> example, even a read-only transaction at this level may see a control 
record updated to show that a batch has been completed but /not/ see one 
> of the detail records which is logically part of the batch because it 
> read an earlier revision of the control record. Attempts to enforce 
> business rules by transactions running at this isolation level are not 
> likely to work correctly without careful use of explicit locks to block 
> conflicting transactions.
> 
> At: 
> https://www.postgresql.org/docs/17/transaction-iso.html#XACT-REPEATABLE-READ:~:text=The%20Repeatable%20Read%20mode,to%20block%20conflicting%20transactions.
> 
> Specifically, I can't understand the example.

I think this is referring to this example:
https://wiki.postgresql.org/wiki/SSI#Deposit_Report
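
A condensed sketch of that wiki example, for anyone not following the link (table and column names here are illustrative, not taken from the page):

```sql
-- Deposit Report anomaly under REPEATABLE READ (illustrative names).
-- Schema: control(deposit_no int); receipt(deposit_no int, amount numeric)
--
-- T2 starts first and tags its receipt with the current batch:
--   BEGIN ISOLATION LEVEL REPEATABLE READ;
--   INSERT INTO receipt SELECT deposit_no, 100.00 FROM control;
--   -- ... but commits LAST
-- T1 closes the batch and commits:
--   BEGIN ISOLATION LEVEL REPEATABLE READ;
--   UPDATE control SET deposit_no = deposit_no + 1;
--   COMMIT;
-- T3, a read-only report started after T1's commit, before T2's:
--   BEGIN ISOLATION LEVEL REPEATABLE READ READ ONLY;
--   SELECT deposit_no FROM control;                 -- sees the new batch
--   SELECT * FROM receipt WHERE deposit_no = <old>; -- misses T2's row
--   COMMIT;
-- T3 sees the control record say the old batch is closed, yet misses a
-- receipt that logically belongs to it; no serial ordering of the three
-- transactions produces that view.
```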

Yours,
Laurenz Albe




Re: pg_dump throwing segfault error during backup

2024-10-09 Thread Adrian Klaver

On 10/7/24 22:42, Cassandra Thrift wrote:

Hello Team,

We are facing a segfault issue while initiating backup with pg_dump.
This is for Postgresql community version 12.5 and rhel 7.4 (PostgreSQL 
12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red 
Hat 4.8.5-39), 64-bit)


FYI, Postgres 12 goes EOL in one month.

Which repo are you getting the Postgres package from, RH's or the 
community's (PGDG)?


If community see:

https://yum.postgresql.org/news/rhel7-end-of-life/






We found from the timing that the database backup was being invoked at 
that time, so we had to stop the backup (pg_backup).


I am not familiar with pg_backup; is that an actual program, or are you 
using it as an alias for something else?




We are manually extracting table data currently.


Show the exact command/process you are using to do above.



A core dump was also generated; however, RedHat refused to analyze it, 
saying it is a third-party product.


Kindly let us know whether this is a bug in this version, as it is quite 
old, and whether the core dump can be analyzed somehow, as it is a binary file.


The most recent minor version of Postgres 12 is 12.20. It would not be a 
bad idea to upgrade to that for immediate needs, and then start testing 
newer major versions so you can move to one that will be 
community-supported for some time.
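
On the "can the core dump be analyzed" question: a backtrace can be pulled locally with gdb, along these lines (a sketch only -- the binary and core paths are illustrative, and the matching postgresql12 and glibc debuginfo packages should be installed first for readable symbols):

```shell
# Sketch: open the core against the server binary (paths illustrative)
gdb /usr/pgsql-12/bin/postgres /var/lib/pgsql/12/data/core.1535
# then at the (gdb) prompt:
#   bt full        -- full backtrace of the crashed process
#   info registers -- CPU state at the faulting instruction
```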




Regards
Somraj Chakrabarty


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Adrian Klaver

On 10/8/24 04:06, Mikael Petterson wrote:

Hi,
I find our disk is filling up.


Query the pg_stat_activity view:

https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW

and look for rows whose state column shows 'idle in transaction'.

Report back what you find.
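
A query along these lines (a sketch; the column list is trimmed) will show them:

```sql
-- Sketch: sessions sitting 'idle in transaction', oldest first.
-- Long-lived ones can keep VACUUM from removing dead rows.
SELECT pid, usename, now() - xact_start AS xact_age,
       left(query, 60) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_age DESC;
```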

You should also check out this post:

https://www.cybertec-postgresql.com/en/reasons-why-vacuum-wont-remove-dead-rows/



What can I do to clean up? We are using pgsql 12.
Br,
//Mikael




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Greg Sabino Mullane
On Wed, Oct 9, 2024 at 4:10 AM Mikael Petterson 
wrote:

> Hi,
>
> I find our disk is filling up.
>
> sudo find /var/lib -type f -size +100M -exec ls -lh {} \; | awk '{ print
> $9 ": " $5 }'
>
...

Those files only add up to about 30GB. That's pretty small these days:
time for a bigger disk? Or perhaps the space is elsewhere; you probably
want to do something like
sudo find / -xdev -maxdepth 2 -exec du -sh {} \; | grep -E 'G|M' | sort -g

Cheers,
Greg


Questions about document "Concurrency control" section

2024-10-09 Thread iseki zero

Hello,

I found it difficult to understand some paragraphs in the document's 
"Concurrency control" section.


> The Repeatable Read mode provides a rigorous guarantee that each 
transaction sees a completely stable view of the database. However, this 
view will not necessarily always be consistent with some serial (one at 
a time) execution of concurrent transactions of the same level. For 
example, even a read-only transaction at this level may see a control 
record updated to show that a batch has been completed but /not/ see one 
of the detail records which is logically part of the batch because it 
read an earlier revision of the control record. Attempts to enforce 
business rules by transactions running at this isolation level are not 
likely to work correctly without careful use of explicit locks to block 
conflicting transactions.


At: 
https://www.postgresql.org/docs/17/transaction-iso.html#XACT-REPEATABLE-READ:~:text=The%20Repeatable%20Read%20mode,to%20block%20conflicting%20transactions.


Specifically, I can't understand the example. Why would an earlier 
revision of the control record show that the batch has been completed? 
In my understanding, the control record's state transition should be 
"running" -> "completed", and only after all batch operations have 
completed will the control record be changed to "completed".


The other big problem is this: I interpret the whole batch operation as 
being in one transaction. So can we read updates from another 
transaction that has not yet committed? I read the preceding paragraph 
(Read Committed), where locking behaviour affects the read view, and I 
understand that behaviour might still exist in Repeatable Read. But the 
example speaks of a read-only transaction.


Thank you

iseki zero





Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Torsten Förtsch
Filenames like 16665, 16665.1, 16665.2 etc all represent the same table (or
similar). The number 16665 is called the file node.

To get a list of file nodes for a specific database you can run:

SELECT oid::regclass::text, relfilenode FROM pg_class;

The /16384/ in the path represents the database. To decipher that you can
run:

SELECT datname, oid FROM pg_database;

Once you have all that information, you know which database to connect to
and which tables are big. Then you can DROP/DELETE/TRUNCATE or so.

Useful functions in that context are pg_relation_size(), pg_table_size(),
pg_total_relation_size() and some more. Check out the manual for more
information.

Example:
SELECT oid::regclass::text, relfilenode, pg_relation_size(oid) as size FROM
pg_class ORDER BY size DESC;


On Wed, Oct 9, 2024 at 10:10 AM Mikael Petterson <
mikaelpetter...@hotmail.com> wrote:

> Hi,
>
> I find our disk is filling up.
>
> sudo find /var/lib -type f -size +100M -exec ls -lh {} \; | awk '{ print
> $9 ": " $5 }'
> /var/lib/rpm/Packages: 278M
> /var/lib/pgsql/12/data/base/16384/16583: 392M
> /var/lib/pgsql/12/data/base/16384/16586: 1.0G
> /var/lib/pgsql/12/data/base/16384/16588: 173M
> /var/lib/pgsql/12/data/base/16384/16633: 106M
> /var/lib/pgsql/12/data/base/16384/16644: 179M
> /var/lib/pgsql/12/data/base/16384/16659: 1.0G
> /var/lib/pgsql/12/data/base/16384/16662: 438M
> /var/lib/pgsql/12/data/base/16384/16665: 1.0G
> /var/lib/pgsql/12/data/base/16384/16667: 1.0G
> /var/lib/pgsql/12/data/base/16384/16668: 1.0G
> /var/lib/pgsql/12/data/base/16384/16780: 466M
> /var/lib/pgsql/12/data/base/16384/16786: 182M
> /var/lib/pgsql/12/data/base/16384/16788: 163M
> /var/lib/pgsql/12/data/base/16384/16789: 315M
> /var/lib/pgsql/12/data/base/16384/16790: 126M
> /var/lib/pgsql/12/data/base/16384/16665.2: 403M
> /var/lib/pgsql/12/data/base/16384/16586.7: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.6: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.9: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.8: 1.0G
> /var/lib/pgsql/12/data/base/16384/16659.6: 1.0G
> /var/lib/pgsql/12/data/base/16384/16659.4: 1.0G
> /var/lib/pgsql/12/data/base/16384/16659.5: 1.0G
> /var/lib/pgsql/12/data/base/16384/16668.3: 586M
> /var/lib/pgsql/12/data/base/16384/16586.10: 458M
> /var/lib/pgsql/12/data/base/16384/16659.1: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.2: 1.0G
> /var/lib/pgsql/12/data/base/16384/16659.2: 1.0G
> /var/lib/pgsql/12/data/base/16384/16668.1: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.3: 1.0G
> /var/lib/pgsql/12/data/base/16384/16659.3: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.4: 1.0G
> /var/lib/pgsql/12/data/base/16384/16665.1: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.5: 1.0G
> /var/lib/pgsql/12/data/base/16384/16586.1: 1.0G
> /var/lib/pgsql/12/data/base/16384/16668.2: 1.0G
> /var/lib/pgsql/12/data/base/16384/16667.1: 741M
> /var/lib/pgsql/12/data/base/16384/4652676: 502M
> /var/lib/pgsql/12/data/base/16384/4652688: 155M
>
> How can I clean up?
>
> What can I do to clean up? We are using pgsql 12.
>
> Br,
>
> //Mikael
>
>
>
>
>
>


Disk is filling up with large files. How can I clean?

2024-10-09 Thread Mikael Petterson
Hi,

I find our disk is filling up.

sudo find /var/lib -type f -size +100M -exec ls -lh {} \; | awk '{ print $9 ": 
" $5 }'
/var/lib/rpm/Packages: 278M
/var/lib/pgsql/12/data/base/16384/16583: 392M
/var/lib/pgsql/12/data/base/16384/16586: 1.0G
/var/lib/pgsql/12/data/base/16384/16588: 173M
/var/lib/pgsql/12/data/base/16384/16633: 106M
/var/lib/pgsql/12/data/base/16384/16644: 179M
/var/lib/pgsql/12/data/base/16384/16659: 1.0G
/var/lib/pgsql/12/data/base/16384/16662: 438M
/var/lib/pgsql/12/data/base/16384/16665: 1.0G
/var/lib/pgsql/12/data/base/16384/16667: 1.0G
/var/lib/pgsql/12/data/base/16384/16668: 1.0G
/var/lib/pgsql/12/data/base/16384/16780: 466M
/var/lib/pgsql/12/data/base/16384/16786: 182M
/var/lib/pgsql/12/data/base/16384/16788: 163M
/var/lib/pgsql/12/data/base/16384/16789: 315M
/var/lib/pgsql/12/data/base/16384/16790: 126M
/var/lib/pgsql/12/data/base/16384/16665.2: 403M
/var/lib/pgsql/12/data/base/16384/16586.7: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.6: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.9: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.8: 1.0G
/var/lib/pgsql/12/data/base/16384/16659.6: 1.0G
/var/lib/pgsql/12/data/base/16384/16659.4: 1.0G
/var/lib/pgsql/12/data/base/16384/16659.5: 1.0G
/var/lib/pgsql/12/data/base/16384/16668.3: 586M
/var/lib/pgsql/12/data/base/16384/16586.10: 458M
/var/lib/pgsql/12/data/base/16384/16659.1: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.2: 1.0G
/var/lib/pgsql/12/data/base/16384/16659.2: 1.0G
/var/lib/pgsql/12/data/base/16384/16668.1: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.3: 1.0G
/var/lib/pgsql/12/data/base/16384/16659.3: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.4: 1.0G
/var/lib/pgsql/12/data/base/16384/16665.1: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.5: 1.0G
/var/lib/pgsql/12/data/base/16384/16586.1: 1.0G
/var/lib/pgsql/12/data/base/16384/16668.2: 1.0G
/var/lib/pgsql/12/data/base/16384/16667.1: 741M
/var/lib/pgsql/12/data/base/16384/4652676: 502M
/var/lib/pgsql/12/data/base/16384/4652688: 155M

How can I clean up? We are using pgsql 12.

Br,

//Mikael








Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Philip Semanchuk



> On Oct 9, 2024, at 5:52 AM, Torsten Förtsch  wrote:
> 
> Filenames like 16665, 16665.1, 16665.2 etc all represent the same table (or 
> similar). The number 16665 is called the file node.
> 
> To get a list of file nodes for a specific database you can run:
> 
> SELECT oid::regclass::text, relfilenode FROM pg_class;
> 
> The /16384/ in the path represents the database. To decipher that you can run:
> 
> SELECT datname, oid FROM pg_database;
> 
> Once you have all that information, you know which database to connect to and 
> which tables are big. Then you can DROP/DELETE/TRUNCATE or so.

Mikael, if you’re unaware of VACUUM FULL (as opposed to just VACUUM), you 
should read about that too.
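
For example (a sketch -- 'mytable' is a placeholder; note that VACUUM FULL takes an exclusive lock and rewrites the table, so it temporarily needs roughly the table's size in free space):

```sql
-- Size of the table including indexes and TOAST, then compact it.
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
VACUUM FULL mytable;  -- exclusive lock; writes a full new copy of the table
```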

Hope that helps,
Philip



pg_dump throwing segfault error during backup

2024-10-09 Thread Cassandra Thrift
Hello Team,

We are facing a segfault issue while initiating backup with pg_dump.
This is for Postgresql community version 12.5 and rhel 7.4 (PostgreSQL 12.5
on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat
4.8.5-39), 64-bit)

The error below is showing:

from /var/log/message

[root@USMS-PV-CRMDB2 log]# grep 'segfault' messages*
messages:Sep 24 00:15:56 USMS-PV-CRMDB2 kernel: postgres[1535]: segfault at
23c ip 7f272c91f61d sp 7ffe22636718 error 6 in libc-2.17.so
[7f272c7d3000+1b8000]
messages:Sep 25 00:14:59 USMS-PV-CRMDB2 kernel: postgres[4245]: segfault at
23c ip 7f272c91f61d sp 7ffe22636718 error 6 in libc-2.17.so
[7f272c7d3000+1b8000]
messages-20240923:Sep 21 00:07:13 USMS-PV-CRMDB2 kernel: postgres[14743]:
segfault at 23c ip 7f272c91f622 sp 7ffe22636718 error 6 in
libc-2.17.so[7f272c7d3000+1b8000]
messages-20240923:Sep 22 00:06:53 USMS-PV-CRMDB2 kernel: postgres[392]:
segfault at 23c ip 7f272c91f61d sp 7ffe22636718 error 6 in
libc-2.17.so[7f272c7d3000+1b8000]
messages-20240923:Sep 23 00:16:07 USMS-PV-CRMDB2 kernel: postgres[28843]:
segfault at 23c ip 7f272c91f622 sp 7ffe22636718 error 6 in
libc-2.17.so[7f272c7d3000+1b8000]

From the db log:

2024-09-26 00:47:05.312 +08 [9908] HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2024-09-26 00:47:05.306 +08 [10870] WARNING:  terminating connection
because of crash of another server process
2024-09-26 00:47:05.306 +08 [10870] DETAIL:  The postmaster has commanded
this server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.

We found from the timing that the database backup was being invoked at the
time, so we had to stop the backup (pg_backup).

We are manually extracting table data currently.

A core dump was also generated; however, RedHat refused to analyze it,
saying it is a third-party product.

Kindly let us know whether this is a bug in this version, as it is quite
old, and whether the core dump can be analyzed somehow, as it is a binary file.

Regards
Somraj Chakrabarty


Re: pg_dump throwing segfault error during backup

2024-10-09 Thread Ron Johnson
On Wed, Oct 9, 2024 at 4:08 AM Cassandra Thrift 
wrote:

> Hello Team,
>
> We are facing a segfault issue while initiating backup with pg_dump.
> This is for Postgresql community version 12.5 and rhel 7.4 (PostgreSQL
> 12.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat
> 4.8.5-39), 64-bit)
>
> The error below is showing:
>
> from /var/log/message
>
> [root@USMS-PV-CRMDB2 log]# grep 'segfault' messages*
> messages:Sep 24 00:15:56 USMS-PV-CRMDB2 kernel: postgres[1535]: segfault
> at 23c ip 7f272c91f61d sp 7ffe22636718 error 6 in libc-2.17.so
> [7f272c7d3000+1b8000]
> messages:Sep 25 00:14:59 USMS-PV-CRMDB2 kernel: postgres[4245]: segfault
> at 23c ip 7f272c91f61d sp 7ffe22636718 error 6 in libc-2.17.so
> [7f272c7d3000+1b8000]
> messages-20240923:Sep 21 00:07:13 USMS-PV-CRMDB2 kernel: postgres[14743]:
> segfault at 23c ip 7f272c91f622 sp 7ffe22636718 error 6 in
> libc-2.17.so[7f272c7d3000+1b8000]
> messages-20240923:Sep 22 00:06:53 USMS-PV-CRMDB2 kernel: postgres[392]:
> segfault at 23c ip 7f272c91f61d sp 7ffe22636718 error 6 in
> libc-2.17.so[7f272c7d3000+1b8000]
> messages-20240923:Sep 23 00:16:07 USMS-PV-CRMDB2 kernel: postgres[28843]:
> segfault at 23c ip 7f272c91f622 sp 7ffe22636718 error 6 in
> libc-2.17.so[7f272c7d3000+1b8000]
>
> From db log :
>
> 2024-09-26 00:47:05.312 +08 [9908] HINT:  In a moment you should be able
> to reconnect to the database and repeat your command.
> 2024-09-26 00:47:05.306 +08 [10870] WARNING:  terminating connection
> because of crash of another server process
> 2024-09-26 00:47:05.306 +08 [10870] DETAIL:  The postmaster has commanded
> this server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
>
> We found from the timing that the database backup was being invoked at
> the time, so we had to stop the backup (pg_backup).
>
> We are manually extracting table data currently.
>
> A core dump was also generated; however, RedHat refused to analyze it,
> saying it is a third-party product.
>
> Kindly let us know whether this is a bug in this version, as it is quite
> old, and whether the core dump can be analyzed somehow, as it is a binary file.
>

PG 12.5 is missing *four years* of patches, and RHEL 7.4 is also far behind
the current 7.9.

https://www.postgresql.org/docs/release/ tells you what's been done to v12
in every patch release.



Re: Disk is filling up with large files. How can I clean?

2024-10-09 Thread Ron Johnson
On Wed, Oct 9, 2024 at 9:02 AM Philip Semanchuk <
phi...@americanefficient.com> wrote:

>
> > On Oct 9, 2024, at 5:52 AM, Torsten Förtsch 
> wrote:
> >
> > Filenames like 16665, 16665.1, 16665.2 etc all represent the same table
> (or similar). The number 16665 is called the file node.
> >
> > To get a list of file nodes for a specific database you can run:
> >
> > SELECT oid::regclass::text, relfilenode FROM pg_class;
> >
> > The /16384/ in the path represents the database. To decipher that you
> can run:
> >
> > SELECT datname, oid FROM pg_database;
> >
> > Once you have all that information, you know which database to connect
> to and which tables are big. Then you can DROP/DELETE/TRUNCATE or so.
>
> Mikael, if you’re unaware of VACUUM FULL (as opposed to just VACUUM), you
> should read about that too.
>

VACUUM FULL with an almost-full disk probably isn't the wisest idea.
