Wal files in /pgsql/14/main/pg_wal not removed

2021-12-26 Thread Lucas
Hi guys, and happy holidays.

My /pgsql/14/main/pg_wal directory is filling up. The server is not respecting 
the wal_keep_size = 100GB. I have no replication slots in this server, and I 
have already restarted PostgreSQL. 

I don’t know what else to look for. Would appreciate any help.

My postgresql.conf can be found below:

data_directory = '/pgsql/14/main'
hba_file = '/etc/postgresql/14/main/pg_hba.conf'
ident_file = '/etc/postgresql/14/main/pg_ident.conf'
external_pid_file = '/var/run/postgresql/14-main.pid'
unix_socket_directories = '/var/run/postgresql'

# SSL
ssl = on
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'

listen_addresses = '*'
port = 5432
random_page_cost = 1.1
max_connections = 400
shared_buffers = 3840MB
statement_timeout = 0
work_mem = 209719830kB
maintenance_work_mem = 960MB
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
track_activity_query_size = 102400
synchronous_commit = off
synchronous_standby_names = '*'
wal_buffers = 16MB
wal_level = replica

# PG 14 new parameters
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 2
max_parallel_workers_per_gather = 1
max_parallel_workers = 2
max_parallel_maintenance_workers = 1

# Replication parameters
max_replication_slots = 15
hot_standby_feedback = on
hot_standby = on
fsync = on
wal_keep_size = 100GB

restore_command = 'cp /data/wal_archive/%f %p'

checkpoint_completion_target = 0.9
archive_mode = on
archive_command = '/pgsql/pg-archive-wal-to-slaves.sh "%p"'
archive_timeout = 300
max_wal_senders = 50
effective_cache_size = 11520MB
logging_collector = on
log_directory = '/data/postgresql/log'
log_filename = 'postgresql.log.%a'
log_rotation_age = 1440
log_rotation_size = 0
log_truncate_on_rotation = on
log_min_duration_statement = 1000
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_lock_waits = on
log_statement = 'ddl'
log_timezone = 'UTC'
stats_temp_directory = '/var/run/postgresql/main.pg_stat_tmp'
autovacuum = on
log_autovacuum_min_duration = 1000
autovacuum_max_workers = 5
autovacuum_naptime = 40s
autovacuum_vacuum_threshold = 200
autovacuum_analyze_threshold = 150
autovacuum_vacuum_scale_factor = 0.02
autovacuum_analyze_scale_factor = 0.005
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
deadlock_timeout = 2s
max_files_per_process = 4096
effective_io_concurrency = 200
default_statistics_target = 100

# https://dba.stackexchange.com/a/280727
max_standby_streaming_delay = 30s

Thanks!
Lucas
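
For reference, a couple of diagnostic queries (not from the original post) that show what is holding WAL on a PostgreSQL 14 server like this one; run them as a superuser:

-- replication slots pin WAL until it is consumed; this should return zero rows here
SELECT slot_name, slot_type, active, restart_lsn, wal_status
FROM pg_replication_slots;

-- how many segments are actually sitting in pg_wal, and their total size
SELECT count(*) AS segments, pg_size_pretty(sum(size)) AS total
FROM pg_ls_waldir();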

Re: storing zipped SQLite inside PG ?

2021-12-26 Thread amihay gonen
Is SQLite a constraint?

From a big-data perspective it is better to find a more natural format like
Parquet or even JSON. For example, what if you have different SQLite versions?

On Thu, 23 Dec 2021 at 17:02, Дмитрий Иванов <firstdis...@gmail.com> wrote:

> Or, if you want to extend this theme, you can use a PostgreSQL-based
> "SQLite file player" with
> PostgreSQL + Python[sqlite3] extension. This way you can provide direct
> access to SQLite files without duplicating data in PostgreSQL cluster
> tables.
> PS: It may seem that this will reduce performance. When I started my
> project, I had some preconceptions about Python. But analyzing projects
> like Patroni changed my mind.
> --
> Regards, Dmitry!
>
>
> On Wed, 22 Dec 2021 at 10:24, David G. Johnston wrote:
>
>> On Tue, Dec 21, 2021 at 10:06 PM David Gauthier 
>> wrote:
>>
>>> I'll have to read more about sqlite_fdw. Thanks for that Steve !
>>>
>>> Each SQLite isn't that big (not billions of records), more like 30K records
>>> or so.  But there are lots and lots of these SQLite DBs, which add up over
>>> time to perhaps billions of records.
>>>
>>> This is for a big corp with an IT dept.  Maybe I can get them to upgrade
>>> the DB itself.
>>> Thank You too David !
>>>


>> So, more similar to the image storage question than I first thought, but
>> still large enough that the specific usage patterns and needs end up being
>> the deciding factor (keeping in mind you can pick multiple solutions - so
>> that really old data, ideally on a partition, can be removed from the DB
>> while still remaining accessible, if just more slowly or laboriously).
>>
>> One possibility to consider - ditch the SQLite dependency and just store
>> CSV (but maybe with a funky delimiter sequence).  You can then use
>> "string_to_table(...)" on that delimiter to materialize a table out of the
>> data right in a query.
>>
>> David J.
>>
>>
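
A minimal sketch of the string_to_table() idea described above (the table, columns, and delimiter are illustrative, not from the thread; string_to_table() requires PostgreSQL 14):

-- one text blob per run, records separated by an unlikely delimiter sequence
CREATE TABLE run_payload (
    run_id  integer PRIMARY KEY,
    payload text NOT NULL
);

-- materialize the stored records as rows, right in the query
SELECT run_id, string_to_table(payload, '|~|') AS record
FROM run_payload
WHERE run_id = 42;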

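And a rough sketch of the PostgreSQL + Python[sqlite3] approach Dmitry mentions, using plpython3u; the function name, file path, and jsonb return type are assumptions, not something from the thread:

-- plpython3u is an untrusted language, so only a superuser can create this
CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION sqlite_query(path text, query text)
RETURNS SETOF jsonb AS $$
# open the SQLite file and stream each row back as a JSON document
import json, sqlite3
con = sqlite3.connect(path)
con.row_factory = sqlite3.Row
try:
    for row in con.execute(query):
        yield json.dumps({k: row[k] for k in row.keys()}, default=str)
finally:
    con.close()
$$ LANGUAGE plpython3u;

-- example call; the file and table are hypothetical
SELECT sqlite_query('/data/uploads/run1.db', 'SELECT * FROM results');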

Re: Wal files in /pgsql/14/main/pg_wal not removed

2021-12-26 Thread Tom Lane
Lucas  writes:
> My /pgsql/14/main/pg_wal directory is filling up. The server is not 
> respecting the wal_keep_size = 100GB. I have no replication slots in this 
> server, and I have already restarted PostgreSQL. 
> I don’t know what else to look for. Would appreciate any help.

> archive_mode = on
> archive_command = '/pgsql/pg-archive-wal-to-slaves.sh "%p"'

You have archive_mode on, so WAL files will not be removed until
your archive_command reports success.  Poke around in what that's
doing.

regards, tom lane
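
For anyone poking around the same way, pg_stat_archiver and the archive_status directory are the quickest places to look (a sketch, to be run as a superuser):

-- a growing failed_count means archive_command keeps failing, and PostgreSQL
-- will not remove the corresponding WAL segments
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;

-- .ready files pile up here while segments wait to be archived
SELECT count(*) AS waiting
FROM pg_ls_archive_statusdir()
WHERE name LIKE '%.ready';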




Re: Wal files in /pgsql/14/main/pg_wal not removed

2021-12-26 Thread Lucas


> On 27/12/2021, at 5:14 AM, Tom Lane  wrote:
> 
> Lucas  writes:
>> My /pgsql/14/main/pg_wal directory is filling up. The server is not 
>> respecting the wal_keep_size = 100GB. I have no replication slots in this 
>> server, and I have already restarted PostgreSQL. 
>> I don’t know what else to look for. Would appreciate any help.
> 
>> archive_mode = on
>> archive_command = '/pgsql/pg-archive-wal-to-slaves.sh "%p"'
> 
> You have archive_mode on, so WAL files will not be removed until
> your archive_command reports success.  Poke around in what that's
> doing.
> 
>   regards, tom lane

Thanks for your reply.

I removed my script and set archive_command to a simple cp, and it's working 
now. Indeed, my script has some issues.

Thanks again!
Lucas
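
For the archives: a "simple cp" archive_command along the lines of the example in the PostgreSQL documentation looks like the line below, reusing the /data/wal_archive path from the posted restore_command; this is illustrative, not necessarily what Lucas used. The test ! -f guard keeps an existing segment from being overwritten, and the command must exit 0 only once the file is safely stored.

archive_command = 'test ! -f /data/wal_archive/%f && cp %p /data/wal_archive/%f'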