sequence id overflow ERROR using timescaledb

2022-08-04 Thread abrahim abrahao

 
 I am using timescaledb version  2.7.2, and PostgreSQL 12.11 (Ubuntu 
12.11-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit
I tried to compress a chunk using the compress_chunk function, and also by running a job, and I got a "sequence id overflow" ERROR. Any idea how to fix it, or why I got this error? Note: I compressed other chunks without a problem.
SELECT 'set temp_file_limit =-1; SELECT compress_chunk(''' || chunk_schema || '.' || chunk_name || ''');'
FROM timescaledb_information.chunks
WHERE is_compressed = false;

                                         ?column?
--------------------------------------------------------------------------------------------
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_3_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_4_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_5_chunk');
 set temp_file_limit =-1; SELECT compress_chunk('_timescaledb_internal._hyper_1_8_chunk');
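
An aside not in the original post: in psql, the generated statements can be executed directly with the \gexec meta-command instead of being copied back in by hand. A minimal sketch (set temp_file_limit separately in the session if needed):

```
-- Generate one compress_chunk() call per uncompressed chunk and let psql
-- execute each generated statement via \gexec (psql 9.6+).
SELECT format('SELECT compress_chunk(%L);', chunk_schema || '.' || chunk_name)
FROM timescaledb_information.chunks
WHERE NOT is_compressed
\gexec
```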


SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
DEBUG:  building index "pg_toast_29929263_index" on table "pg_toast_29929263" serially
DEBUG:  building index "compress_hyper_3_12_chunk__compressed_hypertable_3_ivehicleid__" on table "compress_hyper_3_12_chunk" serially
DEBUG:  building index "compress_hyper_3_12_chunk__compressed_hypertable_3_gid__ts_meta" on table "compress_hyper_3_12_chunk" serially
DEBUG:  building index "compress_hyper_3_12_chunk__compressed_hypertable_3_irowversion_" on table "compress_hyper_3_12_chunk" serially
DEBUG:  building index "compress_hyper_3_12_chunk__compressed_hypertable_3_gdiagnostici" on table "compress_hyper_3_12_chunk" serially
DEBUG:  building index "compress_hyper_3_12_chunk__compressed_hypertable_3_gcontrolleri" on table "compress_hyper_3_12_chunk" serially
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.58", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.86", size 103432192
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.85", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.84", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.83", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.82", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.81", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.80", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.79", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.78", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.77", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.76", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.75", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.74", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.73", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.72", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.71", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.70", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.69", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.68", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.67", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.66", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.65", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.64", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.63", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.62", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.61", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.60", size 1073741824
LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp713.59", size 1073741824
ERROR:  sequence id overflow
Time: 1105092.862 ms (18:25.093)



SET client_min_messages TO DEBUG1; CALL run_job(1000);
SET
Time: 1.120 ms
DEBUG:  Executing policy_compression with parameters {"hypertable_id": 1, "compress_after": "28 days"}
DEBUG:  building index "pg_toast_29929276_index" on table "pg_toast_29929276" serially
DEBUG:  building index "compress_hyper_3_13_chunk__compressed_hypertable_3_ivehicleid__" on table "compress_hyper_3_13_chunk" serially
DEBUG:  building index "compress_hyper_3_13_chunk__compressed_hypertable_3_gid__ts_meta" on table "compress_hyper_3_13_chunk" serially
DEBUG:  building index "compress_hyper_3_13_chunk__compressed_hypertable_3_ir

vacuum freeze wait_event BufferPin

2024-01-31 Thread abrahim abrahao
 There is a VACUUM FREEZE on a partition table that has been running for three days with wait_event = BufferPin. There are no transactions (active or idle) older than it that are not blocked, but at least one SELECT query related to this partition table is executing at all times.
 This behaviour happens with some partitions, but not always. Running a plain VACUUM works because it does not wait on BufferPin. I would only expect to see the VACUUM FREEZE with wait_event = BufferPin if there were an old transaction related to the table; am I wrong? Is there a way to figure out which session the VACUUM FREEZE is waiting for?
 Below is a list of all sessions at a particular moment in time; at this point, there are only SELECT queries with query starts older than the vacuum freeze, and no cursors, transaction blocks, or zombie sessions.


SELECT pg.query_start, now() - query_start AS duration, pg.pid AS pg_pid, pg.state,
       pg.wait_event, backend_type, pg.wait_event_type,
       pg_blocking_pids(pid) AS blocked_by, SPLIT_PART(trim(pg.query), ' ', 1)
FROM pg_stat_activity pg
ORDER BY now() - query_start DESC;

          query_start          |     duration     | pg_pid | state  |      wait_event      |         backend_type         | wait_event_type | blocked_by | split_part
-------------------------------+------------------+--------+--------+----------------------+------------------------------+-----------------+------------+------------
                               |                  |    249 |        | AutoVacuumMain       | autovacuum launcher          | Activity        | {}         |
                               |                  |    252 |        | LogicalLauncherMain  | logical replication launcher | Activity        | {}         |
                               |                  |    247 |        | BgWriterMain         | background writer            | Activity        | {}         |
                               |                  |    250 |        | ArchiverMain         | archiver                     | Activity        | {}         |
                               |                  |    246 |        | CheckpointWriteDelay | checkpointer                 | Timeout         | {}         |
                               |                  |    248 |        | WalWriterMain        | walwriter                    | Activity        | {}         |
 2024-01-27 23:10:54.159577+00 | 3 days 18:55:48. | 271039 | active | BufferPin            | client backend               | BufferPin       | {}         | vacuum
 2024-01-30 21:33:50.262265+00 | 20:32:52.490309  | 558810 | idle   | ClientRead           | client backend               | Client          | {}         | SELECT
 2024-01-31 16:53:17.695929+00 | 01:13:25.056645  | 641053 | idle   | ClientRead           | client backend               | Client          | {}         | SELECT
 2024-01-31 18:04:48.929097+00 | 00:01:53.823477  | 645812 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:05:02.221272+00 | 00:01:40.531302  | 645814 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:05:04.927131+00 | 00:01:37.825443  | 645925 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:05:12.088875+00 | 00:01:30.663699  | 645109 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:05:30.202305+00 | 00:01:12.550269  | 645695 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:05:47.872601+00 | 00:00:54.879973  | 645924 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:06:16.425286+00 | 00:00:26.327288  | 645593 | active |                      | client backend               |                 | {}         | SELECT
 2024-01-31 18:06:18.286905+00 | 00:00:24.465669  | 646174 | idle   | ClientRead           | client backend               | Client

Re: vacuum freeze wait_event BufferPin

2024-02-02 Thread abrahim abrahao
Thanks Greg, I really appreciated your message. I executed the query you shared, and it shows exactly the same type of lock you talked about; it helped me a lot. Thanks.
This is a usual behaviour on some busy databases, and I am trying to avoid cancelling sessions. I would also like to double-check my understanding of locks against this documentation (https://www.postgresql.org/docs/14/explicit-locking.html). Based on my understanding of Table 13.2, SHARE UPDATE EXCLUSIVE (ShareUpdateExclusiveLock) should not be blocked by ACCESS SHARE (AccessShareLock). Am I wrong about that? If I am not wrong, why is it still blocking?
Note: The information below comes from the link above (Table-Level Lock Modes).

   - SHARE UPDATE EXCLUSIVE (ShareUpdateExclusiveLock)
     - Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent schema changes and VACUUM runs.
     - Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX CONCURRENTLY, CREATE STATISTICS, COMMENT ON, REINDEX CONCURRENTLY, and certain ALTER INDEX and ALTER TABLE variants (for full details see the documentation of these commands).

   - ACCESS SHARE (AccessShareLock)
     - Conflicts with the ACCESS EXCLUSIVE lock mode only.
     - The SELECT command acquires a lock of this mode on referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode.


Note: The information below comes from another database; the first lock was done.

select pid, mode, query_start, SPLIT_PART(trim(query),' ', 1)
from pg_locks join pg_stat_activity using (pid)
where relation::regclass::text = 'mytable' order by 3;

  pid  |           mode           |          query_start          | split_part
-------+--------------------------+-------------------------------+------------
   376 | AccessShareLock          | 2024-02-02 08:11:08.938949+00 | SELECT
   508 | ShareUpdateExclusiveLock | 2024-02-02 08:11:17.822287+00 | vacuum
 52767 | AccessShareLock          | 2024-02-02 19:43:40.110489+00 | SELECT
 53137 | AccessShareLock          | 2024-02-02 19:44:19.331633+00 | SELECT
 53460 | AccessShareLock          | 2024-02-02 19:54:00.315714+00 | SELECT
 54203 | AccessShareLock          | 2024-02-02 19:54:39.449686+00 | SELECT
 53164 | AccessShareLock          | 2024-02-02 20:01:26.429547+00 | SELECT
 54002 | AccessShareLock          | 2024-02-02 20:01:32.749586+00 | SELECT
 53583 | AccessShareLock          | 2024-02-02 20:01:34.624046+00 | SELECT
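
An aside not in the original message: adding pg_locks.granted to the same query makes it explicit whether any session is actually waiting at the table-lock level (as the replies below explain, the real wait here is a buffer pin, not a lock conflict). A minimal sketch against the same 'mytable':

```
-- If every row shows granted = t, no table-level lock conflict is involved;
-- the vacuum's wait must come from somewhere else (here: a buffer pin).
SELECT l.pid, l.mode, l.granted, a.wait_event_type, a.wait_event
FROM pg_locks l
JOIN pg_stat_activity a USING (pid)
WHERE l.relation::regclass::text = 'mytable'
ORDER BY l.granted, l.pid;
```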





On Friday, February 2, 2024 at 01:37:19 p.m. EST, Greg Sabino Mullane wrote:
 
 On Wed, Jan 31, 2024 at 2:05 PM abrahim abrahao  wrote:

 There is a VACUUM FREEZE on a partition table that has been running for three days with wait_event = BufferPin. There are no transactions (active or idle) older than it that are not blocked, but at least one SELECT query related to this partition table is executing at all times. ...

Is there a way to figure out which session the VACUUM FREEZE is waiting for?

The vacuum needs a chance to get in and make changes to the table, but it's not 
being blocked at the traditional lock level that shows up in pg_blocking_pids. 
You can see what is going on with this:
select pid, mode, query_start, query from pg_locks join pg_stat_activity using (pid)
where relation::regclass::text = 'mytable' order by 3;

That may show your vacuum process with a ShareUpdateExclusiveLock and some 
other processes with other locks, probably AccessShareLock. Those other pids 
need to all finish or be killed - and not have any overlap between them. In 
other words, that vacuum process needs to have exclusive access to the table 
for a split second, no matter if the other process locked the table before or 
after the vacuum started. One crude solution would be to cancel any other 
existing backends interested in that table:
select pg_cancel_backend(pid), now()-query_start, query from pg_locks join pg_stat_activity using (pid)
where relation::regclass::text = 'mytable' and lower(query) !~ 'vacuum';

Not a good long-term solution, but depending on how often the table is updated, 
you might have other options. Perhaps disable  autovacuum for this table and do 
a manual vacuum (e.g. in a cron script) that kills the other backends as per 
above, or runs during a time with not-constant reads on the table. Or have 
something that is able to pause the application. Or if this is a partitioned 
table that might get dropped in the future or at least not queried heavily, do 
not worry about vacuuming it now.
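
Not part of the original reply: a minimal sketch of the per-table autovacuum switch mentioned above (using the same placeholder table name 'mytable'), with the freeze then run manually at a quiet moment, e.g. from cron:

```
-- Stop autovacuum from touching just this partition (per-table storage option):
ALTER TABLE mytable SET (autovacuum_enabled = off);

-- Later, at a quiet moment (or right after cancelling the other backends as
-- in the query above), run the freeze manually:
VACUUM (FREEZE, VERBOSE) mytable;
```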
Cheers,
Greg
  

Re: vacuum freeze wait_event BufferPin

2024-02-07 Thread abrahim abrahao

Thanks again Greg, I really appreciated all the information.


On Friday, February 2, 2024 at 08:16:41 p.m. EST, Greg Sabino Mullane wrote:
 
 On Fri, Feb 2, 2024 at 3:25 PM abrahim abrahao  wrote:

 SHARE UPDATE EXCLUSIVE (ShareUpdateExclusiveLock) should not be blocked by ACCESS SHARE (AccessShareLock). Am I wrong about that? If I am not wrong, why is it still blocking?

Those locks with no other context are somewhat  of a red herring. The important 
part is not that the AccessShare is somehow blocking ShareUpdateExclusive, but 
that the ShareUpdateExclusive process is NOT blocking new AccessShare 
processes! In the internals of postgres, vacuumlazy.c tries to grab a buffer 
lock (different concept from all the user-visible locks above). It politely 
tries to wait[1] until nobody else is grabbing it (aka pinning it), then 
proceeds. The problem is that other processes are allowed to come along and put 
a pin in it as well - the vacuum's shareupdateexclusive lock does not prevent 
that.
So the timeline is:

- Process X runs a long select and pins the buffer.
- Process V runs a vacuum freeze and tries to lock the buffer. It detects other pins, so it waits. It assumes that whoever is holding the pin will release it someday.
- Process Y runs another long select and also pins the buffer.
- Process X ends, and removes its pins.
- Process V still cannot move - it just knows there are still pins. Where they come from does not matter.

As long as there is at least one other process holding a pin, the vacuum freeze cannot continue[2].
That's my understanding of the code, anyway. This could be argued as a bug. I 
am not sure what a solution would be. Cancelling user queries just for a vacuum 
would not be cool, but we could maybe pause future pin-creating actions somehow?
For the time being, forcing a super-quick moment of no table access would seem 
to be your best bet, as described earlier.
Cheers,
Greg
[1] 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/vacuumlazy.c;hb=HEAD#l975
See backend/storage/buffer/bufmgr.c for LockBufferForCleanup()
[2] Quick duplication script:

drop table if exists foobar;
create table foobar as select 1 as id;
alter table foobar set (autovacuum_enabled = off);
update foobar set id = id;

Process 1: begin; select *, pg_sleep(11) from foobar;

Process 2: vacuum(freeze,verbose) foobar; /* blocked */

Process 3: begin; select *, pg_sleep(33) from foobar;

Run in order. Kill Process 1 and Process 2 is still blocked. Kill Process 3 and Process 2 finishes the vacuum. Note that a regular vacuum (without a freeze) will not get blocked.
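
Not part of the original reply: while the reproduction is blocked, one way to watch the pins directly is the pg_buffercache extension and its pinning_backends column. A minimal sketch, assuming the extension is available:

```
-- Requires: CREATE EXTENSION pg_buffercache;
-- Shows which buffers of foobar are currently pinned, and by how many backends.
SELECT b.relblocknumber, b.pinning_backends
FROM pg_buffercache b
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
  AND b.relfilenode = pg_relation_filenode('foobar')
  AND b.pinning_backends > 0;
```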
Cheers,
Greg

  
  

error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

2025-07-08 Thread abrahim abrahao
I got the error “server process was terminated by signal 11: Segmentation fault” when using pg_create_logical_replication_slot with the pgoutput plugin parameter, while using test_decoding worked fine. Any idea what is wrong?

Note: I am using a Docker container, I updated shm-size from 1024 MB to 2 GB, and I am using shared_buffers = 1.5 GB. This is a test server and there is nothing else running. It is the first time I am working with logical replication.
See details below
postgresql.conf file:
wal_level = logical
max_replication_slots = 10
max_wal_senders = 20
listen_addresses = '*'


 psql -U postgres -h postgres -c "SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');"
SSL SYSCALL error: EOF detected
connection to server was lost


< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 0 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > LOG:  Initializing CDC decoder
< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 0 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > STATEMENT:  SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');
< 2025-07-08 14:57:08.821 UTC  0 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > LOG:  server process (PID 1096) was terminated by signal 11: Segmentation fault
< 2025-07-08 14:57:08.821 UTC  0 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > DETAIL:  Failed process was running: SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');
< 2025-07-08 14:57:08.821 UTC  0 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > LOG:  terminating any other active server processes
< 2025-07-08 14:57:08.829 UTC  0 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.829 UTC > LOG:  all server processes terminated; reinitializing
< 2025-07-08 14:57:09.215 UTC  0 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:09.215 UTC > LOG:  database system was interrupted; last known up at 2025-07-08 14:55:39 UTC
< 2025-07-08 14:57:10.037 UTC [unknown] postgres postgres 172.18.0.217(33506)  57P03 2025-07-08 14:57:10 UTC 1101 686d31c6.44d 2025-07-08 14:57:10.037 UTC > FATAL:  the database system is in recovery mode
< 2025-07-08 14:57:10.437 UTC  0 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.437 UTC > LOG:  database system was not properly shut down; automatic recovery in progress
< 2025-07-08 14:57:10.450 UTC  0 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.450 UTC > LOG:  redo starts at 1FB9/CA0
< 2025-07-08 14:57:10.456 UTC  0 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.456 UTC > LOG:  invalid record length at 1FB9/C054DF8: wanted 24, got 0
< 2025-07-08 14:57:10.456 UTC  0 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.456 UTC > LOG:  redo done at 1FB9/C054DC0 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
< 2025-07-08 14:57:10.475 UTC  0 2025-07-08 14:57:09 UTC 1099 686d31c5.44b 2025-07-08 14:57:10.475 UTC > LOG:  checkpoint starting: end-of-recovery immediate wait
< 2025-07-08 14:57:10.501 UTC  0 2025-07-08 14:57:09 UTC 1099 686d31c5.44b 2025-07-08 14:57:10.501 UTC > LOG:  checkpoint complete: wrote 86 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.010 s, sync=0.007 s, total=0.028 s; sync files=18, longest=0.003 s, average=0.001 s; distance=339 kB, estimate=339 kB
< 2025-07-08 14:57:10.510 UTC  0 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:10.510 UTC > LOG:  database system is ready to accept connections


 psql -U postgres -h postgres -c "SELECT pg_create_logical_replication_slot('support7561_repslot', 'test_decoding');"
 pg_create_logical_replication_slot
-------------------------------------
 (support7561_repslot,1FB9/C081668)
(1 row)

postgres@support7560_postgres:/var/lib/postgresql/15/main$ psql -U postgres -h postgres -c "SELECT slot_name, plugin, slot_type, database, active, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots;"
      slot_name      |    plugin     | slot_type | database | active | restart_lsn  | confirmed_flush_lsn
---------------------+---------------+-----------+----------+--------+--------------+---------------------
 support7561_repslot | test_decoding | logical   | postgres | f      | 1FB9/C081630 | 1FB9/C081668


SHOW shared_buffers;
 shared_buffers
----------------
 1532512kB
(1 row)

postgres=# \! uname -a
Linux support7560_postgres 6.8.0-1030-gcp #32~22.04.1-Ubuntu SMP Tue Apr 29 23:17:09 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

psql -U postgres -h postgres -c "select version()"
   version

Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

2025-07-10 Thread abrahim abrahao
 Thanks Hayato and Shlok. The Citus extension package is installed, but it is not preloaded in shared_preload_libraries and the citus extension is not created. I will create a new container without the Citus extension package and add a stack trace (I think that is what you are talking about) as soon as possible, and I will update here as soon as I complete the test.
See information below.

show shared_preload_libraries;
   shared_preload_libraries
--------------------------------
 pg_stat_statements, pg_repack
(1 row)
\dx
                                       List of installed extensions
        Name        | Version |   Schema   |                          Description
--------------------+---------+------------+------------------------------------------------------------------------
 btree_gist         | 1.7     | public     | support for indexing common datatypes in GiST
 ltree              | 1.2     | public     | data type for hierarchical tree-like structures
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 pg_trgm            | 1.5     | public     | text similarity measurement and index searching based on trigrams
 pgcrypto           | 1.3     | public     | cryptographic functions
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis            | 3.5.1   | public     | PostGIS geometry and geography spatial types and functions
 uuid-ossp          | 1.1     | public     | generate universally unique identifiers (UUIDs)
(8 rows)

Steps done until the pg_create_logical_replication_slot command (just the steps, not the full commands):

- Set wal_level, max_replication_slots, max_wal_senders and listen_addresses:

          name          | setting
------------------------+---------
 listen_addresses       | *
 max_replication_slots  | 10
 max_wal_senders        | 20
 wal_level              | logical

- Changed the pg_hba file
- Restarted the database: pg_ctl restart -D $POSTGRESQL_DATA
- Created a user: "CREATE USER user_rep WITH REPLICATION ENCRYPTED PASSWORD"
- ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA myg GRANT SELECT ON TABLES TO user_rep;
- CREATE PUBLICATION myg_pub FOR TABLES IN SCHEMA myg;
- ALTER PUBLICATION myg_pub ADD TABLE myg
- SELECT snapshot_name FROM pg_create_logical_replication_slot
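
Not from the original message: a generic sketch of what such a publisher-side setup typically looks like; all object names below are illustrative, not the poster's actual ones, and FOR TABLES IN SCHEMA needs PostgreSQL 15+:

```
-- Publisher-side setup for logical replication (illustrative names only).
CREATE USER repl_user WITH REPLICATION ENCRYPTED PASSWORD 'changeme';
GRANT USAGE ON SCHEMA app TO repl_user;
ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA app
    GRANT SELECT ON TABLES TO repl_user;
CREATE PUBLICATION app_pub FOR TABLES IN SCHEMA app;
-- pgoutput is the output plugin used by built-in logical replication:
SELECT * FROM pg_create_logical_replication_slot('app_slot', 'pgoutput');
```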

On Wednesday, July 9, 2025 at 10:19:07 p.m. EDT, Hayato Kuroda (Fujitsu) wrote:
 
 Dear Shlok, Abrahim,

> Also, I was going to the logs on found:
> > > < 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414)
> SELECT 0 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08
> 14:57:08.653 UTC > LOG:  Initializing CDC decoder
> 
> This log is not present in Postgres source code. Why is this log appearing 
> here?

I found that output in the Citus source code [1]. So, I'm afraid you may have loaded the shared library provided by Citus when you created the replication slot.

If so, the Citus community may be the better place to discuss the bug.
We can help if you can reproduce the bug with the PostgreSQL core code.

[1]: 
https://github.com/citusdata/citus/blob/5deaf9a61673e10c183b6d4f13593f168e1c2c10/src/backend/distributed/cdc/cdc_decoder.c#L85

Best regards,
Hayato Kuroda
FUJITSU LIMITED

  

Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

2025-07-11 Thread abrahim abrahao
 Hi Hayato and Shlok, I confirmed that it is related to Citus; everything worked after removing the Citus installation from the Docker image.
I have not added a stack trace yet on the new installation. It seems that the present Citus installation was done in an unusual way. I will work on figuring out a better way to install it.

 Thanks for your help, I appreciate it.


On Thursday, July 10, 2025 at 08:10:18 p.m. MDT, Hayato Kuroda (Fujitsu) wrote:
 
 Dear Abrahim

> The Citus extension package is installed, but it is not preload on 
> shared_preload_libraries
> and citus extesion is not created.

It is possible for a shared library to be loaded even if shared_preload_libraries is not set
and CREATE EXTENSION has not been executed. Per my understanding, the specified plugin
name is searched for using the same rules as other libraries. See [1].

Another example is 'test_decoding'. It is a sample plugin which postgres-core
includes. Anyone can use the plugin via SQL function. CREATE EXTENSION is not 
needed.

```
postgres=# SELECT pg_create_logical_replication_slot('slot', 'test_decoding');
 pg_create_logical_replication_slot
-------------------------------------
 (slot,0/1829CE0)
(1 row)
```
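
(Not part of the original reply: for comparison, on a stock PostgreSQL build the same SQL-level call with the built-in pgoutput plugin would normally succeed as well; the slot name below is hypothetical.)

```
-- On an unmodified server this returns a (slot_name, lsn) row just like the
-- test_decoding example above.
SELECT pg_create_logical_replication_slot('demo_slot', 'pgoutput');
SELECT pg_drop_replication_slot('demo_slot');  -- clean up the test slot
```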

> I will create a new container without Citus extension package ...

Yeah, it is quite helpful to understand the issue correctly. Thanks for working 
on it.

[1]: https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-C-DYNLOAD

Best regards,
Hayato Kuroda
FUJITSU LIMITED