PostgreSQL Cascade streaming replication problem
Hi everyone, I have one master and two slaves with cascading replication: Master -> Slave1 -> Slave2. Slave2 crashed two days ago. The problem is on Slave1, where WAL files are accumulating. Archive mode is turned off. There are about 6,000 WAL files, but in the archive_status folder the files have the .done suffix. How can I remove the WAL files which are "waiting" for Slave2? I changed the configuration on Slave1 (I commented out the options "wal_level = hot_standby" and "max_wal_senders = 1"), but without effect. What did I do wrong? I need to stop the accumulation of WAL files and remove the old WAL files which are already processed and only "waiting". Thank you for your tips. Regards, Jakub
Re: Please suggest the best suited unit test framework for postgresql database.
On Sun, 1 Apr 2018 10:26:32 +0530 Raghavendra Rao J S V wrote: > Good morning. > > Please suggest the best suited unit test framework for postgresql > database and also share the related documents to understand the > framework. If anyone there knows Perl (if not I'll be happy to teach it) you can use DBI with Object::Exercise to validate anything. The module documentation includes DBI examples, or I can send you a few. One nice thing about Perl is that you can include source code from a dozen or so other languages, from Python to Java, that will be auto-wrapped and run automatically. This makes it really easy to test multi-language interfaces to the database with a single harness. enjoi -- Steven Lembark 1505 National Ave Workhorse Computing Rockford, IL 61103 lemb...@wrkhors.com +1 888 359 3508
Re: Please suggest the best suited unit test framework for postgresql database.
On 04/02/2018 10:58 AM, Steven Lembark wrote: > On Sun, 1 Apr 2018 10:26:32 +0530 Raghavendra Rao J S V wrote: >> Good morning. Please suggest the best suited unit test framework for postgresql database and also share the related documents to understand the framework. > If anyone there knows Perl (if not I'll be happy to teach it) That's just being mean ;)
How to get an inclusive interval when using daterange
Hi, I have a table *availability* which contains 3 columns: *id, product_id and period_availability (type daterange)*. When I insert data into this table I use this query: insert into availability values ('product x', daterange('2018-02-02', '2018-03-01', '[]')) So I get a period like this: [2018-02-02, 2018-03-02) In my app I tried to subtract a day from the period when I got it from the database. I'm using daterange and not tsrange because daterange gives me less execution time when I run tests with a huge amount of data. So is there any way to get an inclusive interval with daterange, or do I have to use tsrange to get inclusive intervals?
Re: How to get an inclusive interval when using daterange
hmidi slim writes: > When I insert a data into this table I use this query: > insert into availability values ('product x', daterange('2018-02-02', > '2018-03-01', '[]') > So I get a period like this: [2018-02-02, 2018-03-02) Yup. > In my app I tried to subtract a day from the period when I got it from > database.I'm using daterange and not tsrange because the daterange gives me > less execution time when I make tests with a huge number of data.So is > there any way to get an inclusive interval with daterange or I have to use > tsrange to get inclusive intervals? No, daterange will always canonicalize a range into '[)' format. This is explained (perhaps not with adequate clarity) in https://www.postgresql.org/docs/10/static/rangetypes.html#RANGETYPES-DISCRETE The key reason why is to make it clearer which range specifications are equal. For instance, it's not really clear whether ['2018-02-02','2018-03-01'] and ['2018-02-02','2018-03-02') represent the same set of values --- they do if it's a daterange, but not if it's a tsrange. Canonicalizing makes equal ranges look equal. regards, tom lane
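The canonicalization rule Tom describes can be checked directly in psql; the comments below show the expected results for this example:

```sql
-- An inclusive upper bound on a daterange is normalized to the
-- exclusive form one day later ('[]' becomes '[)'):
SELECT daterange('2018-02-02', '2018-03-01', '[]');
-- [2018-02-02,2018-03-02)

-- The two spellings really are the same value, which is the point
-- of canonicalization:
SELECT daterange('2018-02-02', '2018-03-01', '[]')
     = daterange('2018-02-02', '2018-03-02', '[)');
-- t

-- Consequently the upper bound of a daterange is never inclusive:
SELECT upper_inc(daterange('2018-02-02', '2018-03-01', '[]'));
-- f

-- If the application needs the inclusive end date back, subtracting
-- one day from upper() avoids switching to tsrange:
SELECT upper(daterange('2018-02-02', '2018-03-01', '[]')) - 1;
-- 2018-03-01
```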
Re: PostgreSQL Cascade streaming replication problem
On Sun, Apr 01, 2018 at 06:26:51PM +, Jakub Janeček wrote: > What did i do wrong? I need stop comulating WAL files and remove old WAL > files, which are processed and are only "waiting".. Perhaps wal_keep_segments is set and you forgot about it or you used a replication slot that you forgot to drop on slave 1? -- Michael
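Both of the possibilities Michael mentions can be checked from psql on Slave1; note that 'slave2_slot' below is only a placeholder for whatever slot name the first query actually returns:

```sql
-- List replication slots; an inactive slot left over from the crashed
-- Slave2 pins every WAL segment from its restart_lsn onwards:
SELECT slot_name, slot_type, active, restart_lsn
FROM pg_replication_slots;

-- Check how many segments are retained unconditionally:
SHOW wal_keep_segments;

-- If an inactive slot shows up, dropping it lets the next checkpoint
-- recycle the retained WAL ('slave2_slot' is a placeholder name):
SELECT pg_drop_replication_slot('slave2_slot');
```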
Re: PostgreSQL Cascade streaming replication problem
I have set wal_keep_segments = 100 on Slave1, but no effect. I tried commenting out wal_keep_segments, no effect either... Actually I now have 8,500 WAL files and disk space is low. I tried dropping the replication slot on Slave1, no effect. Is it possible to manually remove WAL files? Thanks *Jakub* 2018-04-03 4:43 GMT+02:00 Michael Paquier : > On Sun, Apr 01, 2018 at 06:26:51PM +, Jakub Janeček wrote: > > What did i do wrong? I need stop comulating WAL files and remove old WAL > > files, which are processed and are only "waiting".. > > Perhaps wal_keep_segments is set and you forgot about it or you used a > replication slot that you forgot to drop on slave 1? > -- > Michael >
Re: PostgreSQL Cascade streaming replication problem
Hi everyone, the problem is solved. There was a delay between dropping the replication slot and it taking effect. Jakub 2018-04-03 4:43 GMT+02:00 Michael Paquier : > On Sun, Apr 01, 2018 at 06:26:51PM +, Jakub Janeček wrote: > > What did i do wrong? I need stop comulating WAL files and remove old WAL > > files, which are processed and are only "waiting".. > > Perhaps wal_keep_segments is set and you forgot about it or you used a > replication slot that you forgot to drop on slave 1? > -- > Michael >
Autovacuum Problem
Hello List, I inherited a machine with a Postgres database, which I am fighting with right now. It seems as if the database is growing bigger and bigger over time. Once in a while I have to run a VACUUM FULL statement on a few tables, which then releases a lot of space to the OS. By reading the documentation, I found out that there should be an autovacuum job running which is supposed to do this automatically for me. I can see it running here and there, first ANALYZEing and then VACUUMing:

SELECT pid, query FROM pg_stat_activity WHERE query LIKE 'autovacuum: %';
  pid  |           query
-------+----------------------------
 15456 | autovacuum: VACUUM table1
 16782 | autovacuum: VACUUM table2
(2 rows)

And it seems to complete as well:

SELECT relname, last_vacuum, last_autovacuum, last_analyze, vacuum_count, autovacuum_count, last_autoanalyze FROM pg_stat_user_tables WHERE relname LIKE 'event%' ORDER BY relname ASC;
 relname | last_vacuum |        last_autovacuum        | last_analyze | vacuum_count | autovacuum_count |       last_autoanalyze
---------+-------------+-------------------------------+--------------+--------------+------------------+-------------------------------
 table1  |             | 2018-04-03 02:11:41.983786+02 |              |            0 |               89 | 2018-04-03 01:42:41.12331+02
 table2  |             | 2018-04-03 07:58:19.478713+02 |              |            0 |             1696 | 2018-04-03 07:59:01.567328+02
(2 rows)

However no space seems to be freed to the system. Is there any way a bloody newbie can debug this behaviour? Any help is appreciated! Thanks Stefan
Re: Autovacuum Problem
On 03/04/2018 09:36, Kein Name wrote: > However no space seems to be freed to the system. Is there any way a bloody newbie can debug this behaviour? VACUUM <> VACUUM FULL. Normally, running VACUUM via autovacuum makes free space reusable within the table, but it does not actually return that space to the filesystem / OS (unless the free space happens to be the last blocks in the data file(s)). In normal/average installations and workloads, people no longer (since 8.2) run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where e.g. the need for free space is urgent. > Any help is appreciated! Thanks Stefan -- Achilleas Mantzios IT DEV Lead IT DEPT Dynacom Tankers Mgmt
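One way to watch this distinction in practice is to compare dead-tuple counts against on-disk size; a rough sketch, reusing the table names from the earlier post:

```sql
-- Dead tuples: space that plain (auto)VACUUM has made reusable inside
-- the table without shrinking the data file on disk:
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- On-disk size of one table (including indexes and TOAST), to compare
-- before and after a one-off VACUUM FULL, which rewrites the table and
-- does return space to the OS:
SELECT pg_size_pretty(pg_total_relation_size('table1'));
```

If n_dead_tup stays high while the relation size never shrinks, that is the expected behaviour of plain VACUUM rather than a malfunction.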