Re: Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or near "merge".
Vinodh NV wrote:
> Sir,
>
> I am facing the error Caused by: org.postgresql.util.PSQLException: ERROR:
> syntax error at or near "merge". when I execute the below query in
> postgres database.  The same works fine in Oracle.
>
> merge into net n using dual on (n.id=:id) when matched then update set
> Status=:status, lastStatusDate=:lastStatusDate, errorMessage=:errorMessage,
> errorDetails=:errorDetails when not matched then insert (id, Status,
> lastStatusDate, errorMessage, errorDetails) values
> (:id,:status,:lastStatusDate,:errorMessage,:errorDetails)";
>
> Can you please let me know how to resolve it?

Easy -- just don't use MERGE with Postgres, as it doesn't support MERGE yet.
There's work ongoing to support it, but it will appear in Postgres 11 at the
earliest.

There is a nonstandard command INSERT ... ON CONFLICT DO UPDATE which would
probably be useful here.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
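For reference, a rough equivalent of that MERGE using the nonstandard command
(an untested sketch; it assumes net.id has a unique or primary key constraint,
which ON CONFLICT requires):

    INSERT INTO net (id, Status, lastStatusDate, errorMessage, errorDetails)
    VALUES (:id, :status, :lastStatusDate, :errorMessage, :errorDetails)
    ON CONFLICT (id) DO UPDATE
        SET Status         = EXCLUDED.Status,
            lastStatusDate = EXCLUDED.lastStatusDate,
            errorMessage   = EXCLUDED.errorMessage,
            errorDetails   = EXCLUDED.errorDetails;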
Re: Help troubleshooting SubtransControlLock problems
Laurenz Albe wrote:
> Scott Frazer wrote:
> > Hi, we have a Postgres 9.6 setup using replication that has recently
> > started seeing a lot of processes stuck in "SubtransControlLock" as a
> > wait_event on the read-replicas.  Like this, only usually about 300-800
> > of them:
> >
> >  179706 | LWLockNamed | SubtransControlLock

I think you could get in this situation if the range of open transactions
exceeds what fits in the buffers for subtrans.c pages, and the subtransaction
cache overflows (64 entries apiece; PGPROC_MAX_CACHED_SUBXIDS in
src/include/storage/proc.h).

Each page holds 2048 transactions (4 bytes per xact, 8192 bytes per page;
SUBTRANS_XACTS_PER_PAGE in src/backend/access/transam/subtrans.c), and we
keep 32 pages (NUM_SUBTRANS_BUFFERS in src/include/access/subtrans.h).  So if
your oldest transaction is over 64k XIDs old, and you have transactions with
more than 64 subtransactions, you get in trouble.

A simple solution is to increase NUM_SUBTRANS_BUFFERS and recompile.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
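A minimal sketch of the recompile route (the constant and its default of 32
are as named above; 128 is just an example value -- pick one that covers your
expected range of open XIDs, then rebuild the server):

    /* src/include/access/subtrans.h */
    #define NUM_SUBTRANS_BUFFERS    128     /* was 32 */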
Re: Help troubleshooting SubtransControlLock problems
Scott Frazer wrote: > Currently this seems to be happening when the server just starts taking > connections. It doesn't even need time to scale up anymore (previously it > took about 8 hours for the problem to re-appear) > > It's only happening on the read replicas, though. I've just set my master > to handle all the traffic, but that's not really sustainable I failed to notice at start of thread that these were replicas. I suppose then you would have to be thinking about the range of XIDs in the primary server. Maybe you left some old transaction open, or an uncommitted prepared transaction there? If not, I suppose you're up for some "perf" profiling ... -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: ERROR could not access transaction/Could not open file pg_commit_ts
Jeremy Finzel wrote: > UPDATE: what is actually failing is a call to > pg_xact_commit_timestamp(xmin) on a given table under the view. We still > think we must have some corruption though with pg_commit_ts. This is not a valid query, because the xmin may belong to a frozen tuple. pg_commit_ts does not keep data forever; old files are removed. Anything older than datfrozenxid could return that error. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
Jeremy Finzel wrote: > Getting some concerning errors in one of our databases that is on 9.5.11, > on autovacuum from template0 database pg_authid and pg_auth_members. I > only saw some notes on the list about this error related to materialized > views. FWIW, we did use pg_upgrade to upgrade this database from 9.4 to > 9.5. Here is an example: > > 2018-03-19 12:08:33.946 CDT,,,14892,,5aafee91.3a2c,1,,2018-03-19 12:08:33 > CDT,59/340951,0,ERROR,XX001,"found xmin 2906288382 from before relfrozenxid > 740087784","automatic vacuum of table > ""template0.pg_catalog.pg_authid""""" Can you please supply output of pg_controldata? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
I admit I'm pretty surprised by this whole episode. I have no useful advice to offer here. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [GENERAL] missing public on schema public
Was this ever fixed? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Can PostgreSQL create new WAL files instead of reusing old ones?
David Pacheco wrote: > tl;dr: We've found that under many conditions, PostgreSQL's re-use of old > WAL > files appears to significantly degrade query latency on ZFS. The reason is > complicated and I have details below. Has it been considered to make this > behavior tunable, to cause PostgreSQL to always create new WAL files > instead of re-using old ones? I don't think this has ever been proposed, because there was no use case for it. Maybe you want to work on a patch for it? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Long running INSERT+SELECT query
Vitaliy Garnashevich wrote: > > > Have not worked through all of the above, but a first draft suggestion: > > > > Move the SELECT minus the aggregation functions into a sub-query that > > uses FOR UPDATE. Then do the aggregation on the results of the > > sub-query. > > The aggregated table has hundreds of millions of rows, and the query runs > for many hours (which is one of the reasons why it's better not to fail). I > really doubt that row level locking would work. That would be a lot of RAM > just to hold all the locks. Row locks are not stored in memory. Of course, a FOR KEY SHARE lock would block DELETEs that try to remove the locked row. I think your proposed strategy of trying to merge what other processes did while you were copying is very problematic. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Index/trigger implementation for accessing latest records
Alastair McKinley wrote:

> create function latest_record_update() returns trigger as
> $$
> BEGIN
>     update location_records set latest_record = false
>      where person_id = new.person_id and latest_record is true
>        and id != new.id;
>     return new;
> END;
> $$ language plpgsql;
>
> create trigger latest_record_trigger before insert on location_records
>     for each row execute procedure latest_record_update();

For maximum safety, you should use a BEFORE trigger only to modify the row
being inserted/updated (or to abort the operation); any change to other rows
should occur in an AFTER trigger.  One risk associated with failing to do
that is that some other BEFORE trigger further modifies the new row, making
your trigger-invoked UPDATE wrong.  Also, be sure to think about possible
infinite recursion.

Another angle to keep in mind is what happens with insertions of historical
records, i.e. those that are not latest (today you think "ah, but that never
happens" and three months from now this is requested as a feature.  Time to
rethink the whole development ...)  You'd clobber the latest_record flag
without a replacement for it, which is probably undesirable.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
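For illustration, the same trigger restructured as an AFTER trigger (a sketch
only, reusing the table from the question; it does not address the recursion
or historical-record concerns above):

    create function latest_record_update() returns trigger as
    $$
    BEGIN
        update location_records
           set latest_record = false
         where person_id = new.person_id
           and latest_record is true
           and id != new.id;
        return null;   -- return value is ignored for AFTER ROW triggers
    END;
    $$ language plpgsql;

    create trigger latest_record_trigger after insert on location_records
        for each row execute procedure latest_record_update();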
Re: relkind='p' has no pg_stat_user_tables
Adrian Klaver wrote: > and from here: > > https://www.postgresql.org/docs/10/static/sql-createtable.html > > "A partitioned table is divided into sub-tables (called partitions), which > are created using separate CREATE TABLE commands. The partitioned table is > itself empty. A data row inserted into the table is routed to a partition > based on the value of columns or expressions in the partition key. ... " Yeah, but I think Justin has a valid question from the POV of the user: how can we figure out if we need to re-run analyze on a partitioned table, if the time of last analyze is not stored anywhere? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Control PhoneNumber Via SQL
On 2018-May-15, John McKown wrote: > I don't have any code for you, if that is what you are soliciting. I did > find a couple of informative web sites which help explain how international > phone numbers are formatted. These are known as E.164 numbers. Michael Glaesemann wrote an e164 datatype years ago, which I later tweaked slightly: https://github.com/alvherre/e164 I don't know its status -- not sure if it even compiles. But it might be useful as a starting point. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: pg_multixact/members growing
On 2018-May-23, Tom Lane wrote: > Tiffany Thang writes: > > Where do I find pg_controldata? I could not locate it on the file system. > > Hmm, should be one of the installed PG executables. > > > pg_clog/ or pg_subtrans/ or pg_multixact/offsets/ are getting larger too > > but by only a few hundreds MBs. > > This is consistent with the idea that something is holding back the > oldest-transaction horizon. (However, I'd think you'd also be having > serious problems with table bloat if that were true, so it's a bit > mystifying.) Did you check for unclosed prepared transactions? Another option is that you have a standby server with hot_standby_feedback enabled, and an open transaction there. I'm not sure to what extent it is possible for that to cause multixact problems, but it wouldn't hurt to check. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
Hmm .. surely

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5016181fd7..5d7fa1fb45 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6690,7 +6690,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,
 	xid = HeapTupleHeaderGetXmin(tuple);
 	xmin_frozen = ((xid == FrozenTransactionId) ||
 				   HeapTupleHeaderXminFrozen(tuple));
-	if (TransactionIdIsNormal(xid))
+	if (!xmin_frozen && TransactionIdIsNormal(xid))
 	{
 		if (TransactionIdPrecedes(xid, relfrozenxid))
 			ereport(ERROR,

??

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
On 2018-May-24, Andres Freund wrote: > FWIW, even if that weren't the case: a) there'd be a lot more wrong with > this routine imo. b) some of the tuples affected clearly weren't > frozen... Right. BTW is it just a coincidence or are all the affected tables pg_authid? Maybe the problem is shared relations ..? Maybe the fact that they have separate relfrozenxid (!?) in different databases? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
On 2018-May-24, Andres Freund wrote:
> On 2018-05-24 13:08:53 -0400, Alvaro Herrera wrote:
> > Hmm .. surely
> > 	xid = HeapTupleHeaderGetXmin(tuple);
> > 	xmin_frozen = ((xid == FrozenTransactionId) ||
> > 				   HeapTupleHeaderXminFrozen(tuple));
> > -	if (TransactionIdIsNormal(xid))
> > +	if (!xmin_frozen && TransactionIdIsNormal(xid))
>
> I don't think that's necessary - HeapTupleHeaderGetXmin() returns
> FrozenTransactionId if the tuple is frozen (note the
> HeapTupleHeaderXminFrozen() within).

Ah, yeah ... I probably thought about this when writing it and removed it
for that reason.

BTW I think the definition of HeapTupleHeaderXminFrozen is seriously
confusing, by failing to return true if the xmin is numerically FrozenXid
(which it'll be if the database was pg_upgraded).  I wonder about this one
in HeapTupleSatisfiesMVCC:

	else
	{
		/* xmin is committed, but maybe not according to our snapshot */
		if (!HeapTupleHeaderXminFrozen(tuple) &&
			XidInMVCCSnapshot(HeapTupleHeaderGetRawXmin(tuple), snapshot))
			return false;	/* treat as still in progress */
	}

I think this is not a bug only because XidInMVCCSnapshot does this

	/* Any xid < xmin is not in-progress */
	if (TransactionIdPrecedes(xid, snapshot->xmin))
		return false;

which makes it return false for FrozenXid, but seems more of an accident
than explicitly designed.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: found xmin from before relfrozenxid on pg_catalog.pg_authid
On 2018-May-24, Andres Freund wrote: > On 2018-05-24 16:49:40 -0400, Alvaro Herrera wrote: > > BTW is it just a coincidence or are all the affected tables pg_authid? > > Maybe the problem is shared relations ..? Maybe the fact that they have > > separate relfrozenxid (!?) in different databases? > > Yes, that appears to be part of the problem. I've looked at a number of > shared relation related codepaths, but so far my theory is that the > relcache is wrong. Note that one of the reports in this thread clearly > had a different relcache relfrozenxid than in the catalog. Hmm ... is that because they read the values on different databases? Are you referring to the reports by Maxim Boguk? I see one value from template1, another value from template0. > Then there's also: > http://archives.postgresql.org/message-id/1527193504642.36340%40amazon.com ah, so deleting the relcache file makes the problem go away? That's definitely pretty strange. I see no reason for the value in relcache to become out of step with the catalogued value in the same database ... I don't think we transmit values from one database to another in any way. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: existence of a savepoint?
On 2018-May-27, Stuart McGraw wrote: > Is there some way to to test if a savepoint of a given name > exists? Or better yet, the number of stacked savepoints of > that name? What is the use case for this? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: existence of a savepoint?
On 2018-May-29, Stuart McGraw wrote: > Alternatively if there were a setting to tell Postgresql to > follow the SQL standard behavior of overwriting rather stacking > savepoints, that too would also solve my current problem I think. > Perhaps it is just my limited experience but the former behavior > has always seemed more useful in practice than the latter. I think if what we're doing breaks the semantics of the SQL spec, we're definitely open to changing our behavior. But that wouldn't solve your problem today. What I think could solve your problem today is a C-language extension that uses xact.c callbacks in order to expose a list that you can query from user space. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Code of Conduct plan
On 2018-Jun-05, Gavin Flower wrote: > If we are all adults, then we don't need a CoC. "We're all adults" is wishful thinking. Some old people are just kids who aged but didn't actually mature. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Code of Conduct plan
On 2018-Jun-06, David G. Johnston wrote: > On the topic of privacy - who exactly, from an administrative aspect, has > access to the systems that house these kinds of confidential > communications? Do these emails end up in PostgreSQL.org servers long-term > or is it mainly transient distribution and only individual's personal email > accounts, with whatever hosting provider they choose, hold the messages > long-term? postgresql.org does not host personal email accounts, with a few exceptions. Most of these exceptions are actually just forwards to mailboxes elsewhere, so the traffic stays in the relevant postgresql.org server very briefly. The few accounts that are actual mailboxes in postgresql.org are, as far as I know, only country-specific accounts for advocacy, not personal points of contact. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Performance problem postgresql 9.5
On 2018-Jun-08, Miguel Angel Sanchez Sandoval wrote: > Hi guys, migrate from 8.4 to 9.5, all OK except that 2-3 days pass and the > database experiences slowness, I execute the linux top command and it shows > me a postgres user process executing a strange command (2yhdgrfrt63788) > that I consume a lot of CPU, I see the querys active and encounter select > fun ('./ 2yhdgrfrt63788') , this did not happen in version 8.4, any help > would appreciate it. Hmm, has your database been compromised? You may have an intruder there -- beware. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: ALTER TABLE SET (toast.asdf) is not rejected
On 2018-Jun-11, Justin Pryzby wrote:

> I noticed that this is accepted:
>
> postgres=# ALTER TABLE t SET (toast.asdf=128);
> ALTER TABLE
>
> I thought since "toast" was a core namespace, it would've been rejected?
>
> I recall having read a discussion about verifying these ... I wasn't able
> to find what I was thinking of, but found this one.
> https://www.postgresql.org/message-id/flat/20090114144332.GF24156%40alvh.no-ip.org

Uh.  ISTM that this was lost during the 9.4 cycle, because that command *is*
rejected in 9.3:

alvherre=# ALTER TABLE t SET (toast.asdf=128);
ERROR:  unrecognized parameter "asdf"

but 9.4 takes it.  Weird.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Impact of multixact "members" limit exceeded
On 2018-Jun-13, Daniel Lagerman wrote: > Hello! > > I have a question where I'm trying to determine the seriousness of the > problem. What's your server version? The way to reclaim members space is to move forward the pg_database.datminmxid value from the database where it is oldest; and the way to move *that* forward is to advance the pg_class.relminmxid value from the tables in which it is oldest. So you need to set a low value for vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age, and vacuum those tables. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Impact of multixact "members" limit exceeded
Hello Daniel,

The advice from Lawrence downthread is very much on point.  If you can
upgrade to 9.5, do it.  He mentions savepoints, but one more case is plpgsql
blocks with EXCEPTION clauses.

On 2018-Jun-13, Daniel Lagerman wrote:
> Hello Álvaro,
>
> I'm running at 9.4.3, I know its an older version but upgrading it outside
> the scope right now for various reasons.

Okay.  There's at least one related bug fix in 9.4.4 (b6a3444fa635) but also
8507a5b37bd9 and ec1408155d35 are related, so you'd be better served by an
upgrade anyway.

> Based on the settings I posted what would be your recommended settings
> right now to resolve this situation?

Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
tables fully) and vacuum_multixact_freeze_min_age to some value like one
million (so that they remove most of the oldest multixacts, leaving just the
frontmost one million).  Then vacuum the tables with the highest multixact
ages.  Make sure the datminmxid advances in all databases (incl. postgres
template0 template1); once it has advanced, the oldest member files are
removed.  Don't waste time processing tables with datminmxid higher than the
minimum, as that won't free up any member space.

> I very much appreciate the recommendation but I would also like to now the
> impact of this warning, data right now does not seem to be corrupted, get
> overwritten, for that matter not insert new data or cause issues when read.

Corruption should not occur because 9.4.3 already contains the member
wraparound protections (commit 3ecab37d97ed).

> It just seems that is has stopped creating "members" and I can't read into
> what problem that would cause, table was around 1.6 TB, it was cleaned up
> and the re-packed using pg-repack to go down to around 400 GB in size,
> however a freeze vacuum or autovacuum did not complete.

Uhh .. pg_repack has been known to cause catalog corruption.  I don't know
if those bugs have been fixed.  At 2ndQuadrant we recommend *not* running
pg_repack.

> Then when it comes to vacuum, do you think its worth increasing work_mem to
> say 200 GB,

work_mem does not affect vacuum.  maintenance_work_mem is the one to change;
autovacuum_work_mem can be used for autovacuum.

> for the maintenance period only, and let autovacuum go to town
> on the tables with the highest age and lowest minmixid? Or should one run
> for example flexible freeze instead with zero pauses as nothing is
> happening on the weekends, which means no inbound transactions.

"Zero pauses" (vacuum_cost_delay=0) is best, because it's obviously much
faster.

> I have tried running autovacuum for 40 hours before with 5 GB work_mem, on
> the weekend, across 6 tables at the same time and it never completed on the
> bigger tables which are around 2 TB.

So what did you do, cancel it?

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
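A sketch of the session settings described above, for the manual vacuum runs
(the values are the ones discussed; the table name is made up):

    SET vacuum_multixact_freeze_table_age = 0;      -- force full-table scans
    SET vacuum_multixact_freeze_min_age = 1000000;  -- keep only ~1 million recent multixacts
    SET vacuum_cost_delay = 0;                      -- "zero pauses"
    VACUUM VERBOSE some_big_table;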
Re: Impact of multixact "members" limit exceeded
On 2018-Jun-14, Daniel Lagerman wrote: > > Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan > > tables fully) and vacuum_multixact_freeze_min_age to some value like one > > million (so that they remove most of the oldest multixacts, leaving > > just the frontmost one million). Then vacuum the tables with the > > highest multixact ages. Make sure the datminmxid advances in all > > databases (incl. postgres template0 template1); once it has advanced, > > the oldest member files are removed. Don't waste time processing tables > > with datminmxid higher than the minimum, as that won't free up any > > member space. > > Thanks, we opted to go with 1 million min and 100 million table age, the > tables in question are all way over that anyway. We completed vacuum on all > but two which are larger tables and I'm running Vacuum there right now > after I also bumped the maintenance work mem a bit. As this is 9.4 I can't > see the progress except that I noted that initialy it uses less CPU and > once it seems to finnaly start working it uses more CPU 10%->50% but that > is fine. So my hope is that the Vacuum completes over night. Did you observe whether the vacuumed tables' relminmxid advance? If it didn't, then those vacuums are a waste of time. Note that freeze_table_age affects when vacuum does a full table scan vs. when it skips pages due to visibility map. A vacuum that does a partial scan does *not* advance the relfrozenxid / relminmxid; only full-table scan vacuums can do that. (Except in 9.6.) If the relminmxid ages are younger than your 100 million table_age, vacuum won't do a full-table scan. Cheers -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Is there a way to be notified on the CREATE TABLE execution?
On 2018-Jun-20, Igor Korot wrote: > [quote] > In order to create an event trigger, you must first create a function > with the special return type event_trigger. This function need not > (and may not) return a value; the return type serves merely as a > signal that the function is to be invoked as an event trigger. > [/quote] > > So, the function has to be created and compiled. Event trigger functions can be written in plpgsql. You can use pg_event_trigger_ddl_commands() in the ddl_command_end event to obtain a few tidbits about the DDL that was just executed. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
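A minimal sketch of that approach in plpgsql (all names here are made up):

    CREATE FUNCTION on_create_table() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
            RAISE NOTICE 'tag: %  object: %', r.command_tag, r.object_identity;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER notify_create_table
        ON ddl_command_end
        WHEN TAG IN ('CREATE TABLE')
        EXECUTE PROCEDURE on_create_table();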
Re: Sv: Re: Sv: Re: CTE optimization fence
On 2018-Jun-27, Andreas Joseph Krogh wrote: > I know. I hate the INLINE proposal and hope default-behaviour will be like in > other DBs, inline like sub-query as default. GUC for preserving fence is what > I hope will happen. I am probably biased ;-) but I like my proposal #1 in this email https://www.postgresql.org/message-id/20170503173305.fetj4tz7kd56tjlr%40alvherre.pgsql where you have to mark queries where the current behavior is desired (WITH MATERIALIZED), and leave all other queries alone. I don't remember any dissent on that. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: dumping only table definitions
On 2018-Jun-29, Kevin Brannen wrote: > I'm trying to figure out how to dump only the table definitions, well those > and things they need directly, like sequences & types. What I do NOT want are > all the millions (not literally but it feels like it :)) of functions we > have. Triggers would be all right if I must, as we only have a few of those. Try "pg_dump -Fc" followed by pg_restore -l. You can edit the list emitted there, then use it with pg_restore -L. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
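A sketch of that workflow (file names are made up):

    pg_dump -Fc -f mydb.dump mydb
    pg_restore -l mydb.dump > objects.list
    # edit objects.list and delete the FUNCTION entries (and anything
    # else you don't want)
    pg_restore -L objects.list -f definitions.sql mydb.dump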
Re: ddl_command_end not firing DROP events
On 2018-Jul-08, Demitri Muna wrote: Hi > I’ve created a trigger where I want to capture schema-changing events. > I’m finding that DROP events are not being triggered when using > “ddl_command_end". The test case below demonstrates this. I am running > PostgreSQL 10.4. The trigger is fired for the CREATE event, but not > DROP TYPE or DROP TABLE. > > I came across the same question on the mailing list, but the solution > was to use the “sql_drop” trigger instead. > > https://www.postgresql.org/message-id/CAHE3wgjX-N%3DX9mccp4Bs-y-%3DHE%3DqayHEpXpCs1jMT%3DW0ZsYc5A%40mail.gmail.com As I recall, handling of DROP events in the ddl_command_end event is not completely consistent. You may even find that some DROP events are not returned by pg_event_trigger_ddl_commands. I suggest you stick to pg_event_trigger_dropped_objects() instead (for which you need sql_drop). -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
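A minimal sketch of the sql_drop route (names are made up):

    CREATE FUNCTION track_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
            RAISE NOTICE 'dropped % %', obj.object_type, obj.object_identity;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER on_sql_drop ON sql_drop
        EXECUTE PROCEDURE track_drops();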
Re: ERROR: found multixact from before relminmxid
On 2018-Jul-13, Sean McIntyre wrote: > Quick correction :) > > I am running PostgreSQL *9.6.8* on AWS RDS. I'm going to try to pg-repack > the table and will look to upgrade to *9.6.9* (though it's not presently > available on RDS). Yay RDS ... -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: User documentation vs Official Docs
On 2018-Jul-16, Joshua D. Drake wrote: > Think of this (if we can figure out how to pull this off): User on > StackOverflow says, "How do I do X", someone answers with a direct > link to a recipe on PostgreSQL.Org that tells them exactly how to do X > (with caveats of course). There isn't much more user friendly than > that. Sounds like wiki pages could serve this need pretty conveniently. If and when the content is mature enough and migrates to the tutorial main documentation pages, the wiki pages can be replaced with redirects to those. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: User documentation vs Official Docs
On 2018-Jul-20, Adrian Klaver wrote: > On 07/20/2018 11:45 AM, Joshua D. Drake wrote: > > Back to the original idea, it would be great if those participating > > would be willing to help even a little in determining an actual > > direction to take this. > > I would say that discussion should take place in --docs: > > https://www.postgresql.org/list/pgsql-docs/ I don't see why we need this thread to continue. This sounds like somebody looking for a solution when they don't yet know what the problem is. If people want to contribute, there are already some places where they can do so. Articles can be drafted in the wiki initially or, heck, even sites like StackOverflow[1], and if something gets to a level so great that they think it should be enshrined in DocBook, they can turn it into a documentation patch. [1] for extra points, write in SO and then add a link to the question to FAQ in the wiki. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: In certain cases, can UPDATE transactions fail rather than block waiting for “FOR UPDATE lock”?
On 2018-Jul-20, Praveen Kumar wrote: > Regarding UPDATEs on rows that are locked via "SELECT FOR UPDATE" in > another transaction, I read the above as follows: other transactions that > attempt UPDATE of these rows will be blocked until the current transaction > ( which did "SELECT FOR UPDATE" for those rows ) ends, unless the columns > in these rows being UPDATE'ed are those that don't have a unique index on > them that can be used in a foreign key. > > Is this correct ? No. What it means is that if you UPDATE the columns indexed by the unique index, then the FOR UPDATE lock is acquired underneath. If your UPDATE modifies some other column, then a FOR NO KEY UPDATE lock is acquired instead. In both cases, concurrent transactions would be blocked rather than erroring out. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
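To illustrate with a hypothetical table where "id" is the primary key and
"balance" is not part of any key:

    -- session 1
    BEGIN;
    SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

    -- session 2: both of these block until session 1 commits or aborts;
    -- neither one errors out
    UPDATE accounts SET balance = 0 WHERE id = 1;  -- takes FOR NO KEY UPDATE underneath
    UPDATE accounts SET id = 2 WHERE id = 1;       -- takes FOR UPDATE underneath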
Re: logical replication snapshots
On 2018-Jul-24, Dimitri Maziuk wrote: > On 7/23/2018 6:51 PM, Andres Freund wrote: > > Could you describe what exactly you did into that situation? > > Created a database, few publications, as per TFM, and a few subscriptions on > another host. Went on vacation for 3 weeks. The problem host is centos > 7.current w/ postgres rpms from PGDG repo. Postgres is on port 5434 because > there is a stock one on 5432. Are you serious about us trying to diagnose a bug with this description? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Need text wrapping inside a column to avoid record wrapping (tabular output of 'select')
On 2018-Jul-25, David Gauthier wrote:
> Hi:
>
> psql (9.6.7, server 9.1.9) on RHEL6
>
> In order to avoid record wrapping in the tabular output of a "select"
> statement, I need to limit the width of certain columns.  For those
> columns, I would like to have text wrapping so as not to lose any
> information (IOW, I don't want to simply truncate and lose info).

\pset format wrapped

It doesn't break on whitespace though.

 $ 999,00 │ Nine hundred ninety nine dollars and z…
          │…ero cents
 $ 1.000,…│ One thousand dollars and zero cents
 …00      │

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Template0 datfrozenxid age is 160million and progressing
On 2018-Aug-01, Andres Freund wrote: > On 2018-08-01 12:07:16 -0400, Tom Lane wrote: > > Andres Freund writes: > > > On 2018-08-01 10:24:24 -0400, Tom Lane wrote: > > >> IMO, the action you need to take is enabling autovacuum. We've > > >> seen many many people go down the path you are taking, and it's > > >> generally led to no good in the end. Manual vacuuming tends > > >> to miss stuff, and it cannot react adequately to activity spikes. > > > > > But it shouldn't matter here, autovacuum will start regardless, no? > > > > Sure, once it decides that emergency anti-wraparound vacuuming is > > necessary. I really doubt the OP wants that to happen; it's the > > exact opposite of non-intrusive. > > That's solely what would trigger it were autovacuum enabled, too? I've > complained about "emergency anti-wraparound" beeing anything but > emergency (they're largely unavoidable unless you manually script it), > but they're what happen once autovacuum_freeze_max_age is reached, and > that's the only trigger for vacuuming old relations independent of other > activity? With a small database like template0, it doesn't matter. The vacuuming is going to be over before OP realizes it has happened anyway. Certainly having it happen on a normal-sized table can become problematic, but presumably OP has taken steps to avoid it when disabling autovacuum (which is why only template0 is getting into trouble.) I think emergency vacuum should behave differently (not scan indexes, just apply HOT page prune and clear old XIDs/multixacts), which would make it much faster, but that's a separate line of thought (and of development). -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: ALTER TABLE .. SET STATISTICS
On 2018-Aug-05, Ron wrote: > For columns of type bytea which store image data (PDFs, JPGs, etc) would it > speed up the ANALYZE process to SET STATISTICS = 0? Yes. I'd suggest also looking at ALTER TABLE ... SET STORAGE to avoid having Postgres try to compress those columns, which is a useless waste of CPU cycles, since those formats are already compressed. > That way, default_statistics_target could be cranked higher -- giving better > statistics for needed columns -- without polluting pg_statistics with > unneeded data? Right. With the default being 100 I don't think there's a real need to crank it higher server-wide, but it may be useful to crank it higher for specific columns. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
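Something along these lines (table and column names are made up):

    ALTER TABLE documents ALTER COLUMN image_data SET STATISTICS 0;      -- skip stats for this column
    ALTER TABLE documents ALTER COLUMN image_data SET STORAGE EXTERNAL;  -- out-of-line, but no compression
    ALTER TABLE documents ALTER COLUMN customer_id SET STATISTICS 1000;  -- more detail where it helps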
Re: Vacuum process waiting on BufferPin
On 2018-Aug-13, Don Seiler wrote: > This afternoon I discovered an autovacuum process that had been running for > over 6 days. It was waiting on BufferPin event. I kicked off a manual > vacuum+analyze of the table, which automatically killed that autovacuum. > This ran for a few minutes before it too was waiting on a BufferPin event. > I've never witnessed a vacuum session waiting on BufferPin before. Buffer pins are normally released quickly enough. Of course, an idle-in-transaction session has its own problems, but even those would normally not have buffer pins; so vacuum would run to completion without blocking, even if no tuples would be removed. Maybe you had a cursor that was not fully scanned before the session was left idle -- as I recall, those can leave buffers pinned. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: MultiXactId 24018 has not been created yet -- apparent wraparound error
On 2018-Aug-23, Dean Armada wrote: > My team is confused as we are not doing anything then we experienced this > in our staging environment. It's happening only in one of our tables. When > we try fetch in that table, the message below returns > > project=# SELECT * FROM table_table; > ERROR: MultiXactId 24018 has not been created yet -- apparent wraparound > > Please take note that I am using docker containers and my image is > postgres:9.6.3-alpine Quite a few bugs in this area have been fixed in releases after 9.6.3. I suggest an upgrade (9.6.10 is latest in that branch), and report back if it still happens. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Will there ever be support for Row Level Security on Materialized Views?
On 2018-Aug-27, Ken Tanzer wrote:

> - Is there any reason there is no RLS / MV support now?

Nobody worked on it yet.

> - Are there any fundamental reasons why PG couldn't support RLS for MVs?

Doesn't look like it.

> - Is it just a matter of nobody has done it yet?

Yeah.

> - Is there other development work that would need to happen before this
>   could be implemented?

Doesn't look like it.

> - In the scheme of things, is it a lot of work or not so much?

Probably not much.

> - Has there been any discussion of implementing this feature, and if so
>   is there a link to it?

Not that I remember.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Autovacuum degrades all other operations by keeping all buffers dirty?
On 2018-Aug-31, David Pacheco wrote: > From reading the 9.6.3 source, it looks like the autovacuum process > itself is single-threaded, and it reads pages essentially linearly > from the relation (possibly skipping some). When the autovacuum > process needs to modify a page, it doesn't write it directly, but > rather marks the buffer dirty. The page will be written later, Unless there's some bug, there is a BufferAccessStrategy that only lets a few dozen buffers go unwritten before the autovac worker process itself is forced to write some. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: How to get shorter SERVER_VERSION in psql-prompt?
On 2018-Sep-03, Andreas Joseph Krogh wrote:
> Hi all, I'm using the Ubuntu-packages and have this in my .psqlrc:
>
> \set PROMPT1 '%[%033[35m%]%n@%m:%>%[%033[0m%] %:server_version: %[%033[32m%]%/%[%033[0m%]%R%# '
>
> This results in this server_version:
> 10.5 (Ubuntu 10.5-1.pgdg18.04+1)
>
> Is it possible to adjust this somehow so it outputs only "10.5"?

Well, where does that server_version come from?

Try adding this somewhere to .psqlrc:

select setting as server_version from pg_settings where name = 'server_version' \gset

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Sv: Re: How to get shorter SERVER_VERSION in psql-prompt?
On 2018-Sep-03, Andreas Joseph Krogh wrote:
> select setting as server_version from pg_settings where name = 'server_version';
> ┌──────────────────────────────────┐
> │          server_version          │
> ├──────────────────────────────────┤
> │ 10.5 (Ubuntu 10.5-1.pgdg18.04+1) │
> └──────────────────────────────────┘

Ugh.  (So this is coming from "configure --with-extra-version" stuff)

I guess you could just split it out at the first whitespace ...

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
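Something like this in .psqlrc should do it (an untested sketch):

    select split_part(setting, ' ', 1) as server_version from pg_settings where name = 'server_version' \gset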
Re: Behaviour when autovacuum is canceled
On 2018-Sep-13, Martín Fernández wrote: > By performing this changes we are going to start relying more heavily > on the autovacuum work and the concern of "lost work" caused by > autovacuum canceling itself when locking contention happen showed up. > I'm guessing that we might be over thinking this and the canceling is > not going to happen as frequently as we think it will. Any DDL run on a table will cancel an autovacuum over that table (except for-wraparound autovacuums). If these are rare, you don't need to worry about that too much. If they are frequent enough that autovacuum will be cancelled regularly in one table, you'll be sad. If you're running vacuum by hand, you'd probably see your DDL blocking behind VACUUM, which would be very noticeable. I think if you don't have trouble today without having tuned the system carefully to avoid such trouble, you're not likely to have trouble with autovacuum either. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: How to extract information from pg_ddl_command type
On 2019-Jul-11, Thomas Kellerer wrote: > The function pg_event_trigger_ddl_commands() returns several columns, one of > them being "command" that is of the type "pg_ddl_command". > > The manual[1] describes this column as: > > > A complete representation of the command, in internal format. > > This cannot be output directly, but it can be passed to other > > functions to obtain different pieces of information about the > > command. > > However, I can not find any of those "other functions" to extract information > from that column. > > My goal is to get the complete SQL text that fired the event trigger - is > that even possible through pg_event_trigger_ddl_commands()? You need a C function. See the test module in src/test/modules/test_ddl_deparse -- it contains a very simple implementation that doesn't really do a lot, but maybe it helps. The real meat of the thing is in an outdated module I wrote for 9.5. I think the latest version of that was posted by Alexander Shulgin in 2015 here https://postgr.es/m/CACACo5Q_UXYwF117LBhjZ3xaMPyrgqnqE=mxvrhefjj51ac...@mail.gmail.com or maybe it's here https://postgr.es/m/cacaco5qquav+n4gi+ya1jf_a+qenr6sjup8cydpsrxka+fh...@mail.gmail.com -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: disable and enable trigger all when a foreign keys
On 2019-Jul-12, Tom Lane wrote: > The OP already found the best method to get out of this, which is to > drop and re-add the FK constraint. Adding the constraint will force > a full verification scan, which is what you need here since there's > no information available about which checks were missed. Maybe it would make sense to be able to mark the FK as not validated, so that you can apply ALTER CONSTRAINT ... VALIDATE. Seems better than DROP + ADD. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Why no CREATE TEMP MATERIALIZED VIEW ?
On 2019-Jul-16, Ivan Voras wrote: > Hello, > > Out of curiosity, since there's CREATE TEMP VIEW, any particular reason > there's no CREATE TEMP MATERIALIZED VIEW? Because it hasn't been implemented. There is a patch, but it's not done. See this thread: https://postgr.es/m/CAKLmikNoQR4ZNg_wt=-hu-uuxv2cxvdveersmfjr2saopvx...@mail.gmail.com -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Corrupt index stopping autovacuum system wide
On 2019-Jul-17, Peter Geoghegan wrote: > Maybe nbtree VACUUM should do something more aggressive than give up > when there is a "failed to re-find parent key" or similar condition. > Perhaps it would make more sense to make the index inactive (for some > value of "inactive") instead of just complaining. That might be the > least worst option, all things considered. Maybe we can mark an index as unvacuumable in some way? As far as I understand, all queries using that index work, as do index updates; it's just vacuuming that fails. If we mark the index as unvacuumable, then vacuum just skips it (and does not run phase 3 for that table), and things can proceed; the table's age can still be advanced. Obviously it'll result in more bloat than in normal condition, but it shouldn't cause the whole cluster to go down. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Corrupt index stopping autovacuum system wide
On 2019-Jul-17, Tom Lane wrote: > Alvaro Herrera writes: > > On 2019-Jul-17, Peter Geoghegan wrote: > >> Maybe nbtree VACUUM should do something more aggressive than give up > >> when there is a "failed to re-find parent key" or similar condition. > >> Perhaps it would make more sense to make the index inactive (for some > >> value of "inactive") instead of just complaining. That might be the > >> least worst option, all things considered. > > > Maybe we can mark an index as unvacuumable in some way? As far as I > > understand, all queries using that index work, as do index updates; it's > > just vacuuming that fails. If we mark the index as unvacuumable, then > > vacuum just skips it (and does not run phase 3 for that table), and > > things can proceed; the table's age can still be advanced. Obviously > > it'll result in more bloat than in normal condition, but it shouldn't > > cause the whole cluster to go down. > > If an index is corrupt enough to break vacuum, I think it takes a rather > large leap of faith to believe that it's not going to cause problems for > inserts or searches. Maybe, but it's what happened in the reported case. (Note Aaron was careful to do the index replacement concurrently -- he wouldn't have done that if the table wasn't in active use.) > I'd go with just marking the index broken and > insisting that it be REINDEX'd before we touch it again. This might make things worse operationally, though. If searches aren't failing but vacuum is, we'd break a production system that currently works. > (a) once the transaction's failed, you can't go making catalog updates; Maybe we can defer the actual update to some other transaction -- say register an autovacuum work-item, which can be executed separately. > (b) even when you know the transaction's failed, blaming it on a > particular index seems a bit chancy; Well, vacuum knows what index is being processed. Maybe you're thinking that autovac can get an out-of-memory condition or something like that; perhaps we can limit the above only when an ERRCODE_DATA_CORRUPTED condition is reported (and make sure all such conditions do that. As far as I remember we have a patch for this particular error to be reported as such.) > (c) automatically disabling constraint indexes seems less than desirable. Disabling them for writes, yeah. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Hardware for writing/updating 12,000,000 rows per hour
On 2019-Jul-26, Arya F wrote: > I think I can modify my application to do a batch update. Right now the > server has an HDD and it really can't handle a lot of updates and inserts > per second. Would changing to a regular SSD be able to easily do 3000 > updates per second? That's a pretty hard question in isolation -- you need to consider how many indexes are there to update, whether the updated columns are indexed or not, what the datatypes are, how much locality of access you'll have ... I'm probably missing some other important factors. (Of course, you'll have to tune various PG server settings to find your sweet spot.) I suggest that should be measuring instead of trying to guess. A reasonably cheap way is to rent a machine somewhere with the type of hardware you think you'll need, and run your workload there for long enough, making sure to carefully observe important metrics such as table size, accumulated bloat, checkpoint regime, overall I/O activity, and so on. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: DRY up GUI wiki pages
On 2019-Jul-10, Steve Atkins wrote: > > On Jul 10, 2019, at 7:38 PM, Bruce Momjian wrote: > >> I'd like to DRY them up so there aren't two lists which confuses > >> newcomers. Any objections? If not I'll probably make one of those > >> pages into GUI's and one into "non GUI's" or something like that. > > I created the first one because the second one was full of old, stale, > useless things. I believe that everything valid on the second one was > added to the first one at the time. I think we should delete the obsolete pages; maybe make them redirects to the ones that have been kept up to date. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Aggregate not using BRIN index on timestamp
On 2019-Aug-05, Jeremy Finzel wrote: > Thanks Tom. So, this is a very general question, but would it be possible > to develop that feature into BRIN, given what it stores? Even if it does > not have ordering information, doesn't it know which blocks would contain > the lowest values, so it could choose to do a "bitmap scan ordered sort" or > something, depending on the number of rows sought? Or is the problem that > it has no way of determining what value actually would be the "minimum" > without the query specifying a particular date, such as less than > "2013-04-01"? For btrees, we have planagg.c which transforms min() and max() into subqueries (SELECT .. WHERE ... ORDER BY .. LIMIT 1). In a BRIN index, you could execute the search by scanning the index to determine which ranges contain the least/greatest values, and then using a bitmap scan to scan those. I'm not sure that this is a transformation that can be applied cleanly, since that thing I describe doesn't look to be a "subquery". But maybe it can -- I think you'd need a special executor node. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
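For illustration, this is the shape of the rewrite planagg.c performs for a
btree index, written out by hand (table and column are made up):

    -- SELECT min(ts) FROM events;
    -- is planned roughly as:
    SELECT ts FROM events WHERE ts IS NOT NULL ORDER BY ts ASC LIMIT 1;

A BRIN equivalent would instead have to scan the index for the ranges with
the smallest minimum and then bitmap-scan only those blocks.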
Re: Aggregate not using BRIN index on timestamp
On 2019-Aug-05, Tom Lane wrote: > FWIW, I suspect the hard part would be dealing with cases where the > extremal ranges (according to the index) contain no live tuples > (according to the query's snapshot). The btree case handles the > invisible-tuples problem by continuing a scan started at the index > endpoint until it finds a visible tuple --- which, in the worst case, > can take a long time. It's not obvious to me what you'd do with > BRIN. Hmm, yeah, that's a tough problem -- hadn't thought about that. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PMChildFlags array
On 2019-Oct-03, bhargav kamineni wrote:
> bhargav kamineni writes:
> > Postgres was rejecting a bunch of connections from a user who is having a
> > connection limit set. that was the the FATAL error that i could see in log
> > file.
> > FATAL,53300,"too many connections for role ""user_app"""
>
> db=\du user_app
>                          List of roles
>  Role name |           Attributes           |     Member of
> -----------+--------------------------------+--------------------
>  user_app  | No inheritance                +| {application_role}
>            | 100 connections               +|
>            | Password valid until infinity  |

Was the machine overloaded at the time the problem occurred?

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Pg11 -- MultiXactId xxxx has not been created yet -- apparent wraparound
On 2019-Oct-04, Moreno Andreo wrote: > select * from heap_page_items(get_raw_page('tablename',3159)); > select * from heap_page_items(get_raw_page('tablename',3160)); > > and so on for about 5 or 6 pages. Please paste the output of that for pages 3159 and 3160, as well as the output of pg_controldata. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Pg11 -- MultiXactId xxxx has not been created yet -- apparent wraparound
On 2019-Oct-04, Moreno Andreo wrote:
> On 04/10/19 17:30, Alvaro Herrera wrote:
> > On 2019-Oct-04, Moreno Andreo wrote:
> >
> > > select * from heap_page_items(get_raw_page('tablename',3159));
> > > select * from heap_page_items(get_raw_page('tablename',3160));
> > >
> > > and so on for about 5 or 6 pages.
> >
> > Please paste the output of that for pages 3159 and 3160, as well as the
> > output of pg_controldata.
>
> Thanks Alvaro,
> you can find attached the data you requested

Hmm, so it is tuple (3160,31) that's giving you grief -- it has xmax=12800,
t_infomask=0x1103 (HEAP_XMAX_IS_MULTI | HEAP_XMIN_COMMITTED | others).
Which is weird, since it has none of the locking bits.

... and also the valid range of multixacts as of the last checkpoint was:

> Latest checkpoint's NextMultiXactId:  366
> Latest checkpoint's oldestMultiXid:   365

so the value 12800 is certainly not in range there.

I wonder if it would work to just clear that multixact with
  SELECT ... WHERE ctid='(3160,31)' FOR UPDATE

If this were in my hands, I would scan the WAL looking for the place that
last touched this page (and the latest FPI for this page, also).  It might
have an explanation of what went on.  Maybe use the page's LSN (from
pageinspect's page_header()) as a starting point for the WAL location that
modified the page.  I hope you have a WAL archive that goes back to well
before the previous checkpoint.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Pg11 -- MultiXactId xxxx has not been created yet -- apparent wraparound
On 2019-Oct-04, Moreno Andreo wrote:
> On 04/10/19 18:28, Alvaro Herrera wrote:
> > I wonder if it would work to just clear that multixact with
> > SELECT ... WHERE ctid='(3160,31)' FOR UPDATE
>
> select ...what? :-)  Sorry but it's totally beyond my knowledge and my
> control; after resolving the issue I'll surely go and search docs to
> understand what we've done

This should do it:

  SELECT * FROM the_broken_table WHERE ctid = '(3160,31)' FOR UPDATE;

But of course I make no promise of it working or even having any effect at
all ...

> One thing I forgot to report is that this cluster is just upgraded from a
> 9.1 that was crashing at least once a day (in many cases the upgrade itself
> resolved the issue); here's the log line
>
> 2019-10-03 15:11:52 CEST LOG:  server process (PID 18668) was terminated by
> exception 0xC0000005
>
> In this case probably the access violation was due to a data corruption.
> These are customer machines that are really badly kept and NTFS issues are
> not that rare, so I won't bother investigating what's happened but just
> make the customer up & running again.

Hmm, well, it does sound like corrupted data, and if we suspect that that's
the case then there's not much we can do other than clearing the page and
moving on.

That exception code is STATUS_ACCESS_VIOLATION.  Old Postgres bugs caused
that; many are fixed in current versions, I think.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Pg11 -- MultiXactId xxxx has not been created yet -- apparent wraparound
On 2019-Oct-07, Moreno Andreo wrote: > Unfortunately, it didn't work :( > > db0=# select * from failing_table where ctid='(3160,31)' for update; > ERROR: MultiXactId 12800 has not been created yet -- apparent wraparound Oh well. It was a long shot anyway ... > Since the probability we are into corruption is very high, what if I \copy > all the table but the failing row(s) to an external file, drop and recreate > the table, and then \copy clean data back inside? Yes, that should work. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: OID out of range
On 2020-Jan-15, Konireddy Rajashekar wrote:
> rangetest-# \d user_tables_sizes
>               Table "user_tables_sizes"
>  Column |  Type   | Collation | Nullable | Default
> --------+---------+-----------+----------+---------
>  relid  | numeric |           |          |

Why do you have this column defined as numeric?  It seems more sensible to
have it as type oid; it's probably that column where you have the
out-of-range values.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Unable to startup postgres: Could not read from file "pg_clog/00EC"
On 2020-Feb-05, Nick Renders wrote: > Is there anything specific I should check in our postgres installation / > database to make sure it is running ok now? Anyway to see what the > consequences were of purging that one pg_clog file? Losing pg_clog files is pretty bad, and should not happen; then again, this might have been something else (ie. the file was maybe not lost). That said, wrongly overwriting files is even worse. By zeroing an existing pg_clog file, you marked a bunch of transactions as aborted. Your data is now probably inconsistent, if not downright corrupt. I would be looking for my most recent backup ... If you're very lucky, your database might be pg_dumpable. I would try that, followed by restoring it in a separate clean instance and seeing what happens. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: How to fix 0xC0000005 exception in Postgres 9.0
On 2020-Feb-20, Andrus wrote: > > Upgrade to a version of Postgres that is not 5 years past EOL? > > Not possible at moment because requires lot of testing not to break existing > applications. > Planned in future. Applications are already broken, since they can't connect, right? Seems like a great time to test them with an upgraded server. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]
On 2020-Feb-25, Stanislav Motyčka wrote: > Sometimes (for tables with many columns) it would be better and easier > to write "SELECT" statement with clause "EXCEPT": > "SELECT * [EXCEPT col1 [,col2]] FROM ..." I think an important initial question is how other database systems implement this functionality, if they do, and what syntax they offer. From there we can move on to the conversation of where the ISO SQL committee is going on this. I think it's good to extend the standard to some extent, but it would not do to have it extended in a direction that ends up contrary to what they pursue in the future. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Exportacion por lotes
On 2020-Mar-03, Hernan Jesus Gonzalez Carmona wrote: > Dear all, first of all let me introduce myself; my name is Hernan > Gonzalez, I have just subscribed to this mailing list, and I apologize in > advance if this message violates any of the list's rules, but I need help > with something quite urgent. I replied in Spanish copying pgsql-es-ayuda. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Getting a error on creating a partition table index 12.2.
On 2020-Mar-05, Adrian Klaver wrote: > On 3/5/20 10:04 AM, nikhil raj wrote: > > CREATE INDEX t_e20so1_doi_c_doid_idx > > ON public.t_e20so1_doi USING btree > > (i_doid ASC NULLS LAST) > > TABLESPACE pg_default; > > ERROR: cannot specify default tablespace for partitioned relations SQL > > state: 0A000 > > Note: but the same query executed in the 12.1 version; it's working fine. > Well this ERROR appeared here: > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;f=src/backend/commands/indexcmds.c;h=87259588d0ab0b8e742e30596afa7ae25caadb18 The reason for the error is the expectation that creating an index on a partitioned table with a tablespace specification will cause the children indexes (ie. the indexes on the partitions) to use the same tablespace. This does not work properly for the default tablespace, so I made that particular condition an error. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
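In practice, the workaround is simply not to name the default tablespace in the command; both of these sketches should be accepted (some_tablespace is a placeholder for a real, non-default tablespace):

    -- omit the TABLESPACE clause entirely ...
    CREATE INDEX t_e20so1_doi_c_doid_idx
        ON public.t_e20so1_doi USING btree (i_doid ASC NULLS LAST);

    -- ... or name an explicit, non-default tablespace
    CREATE INDEX t_e20so1_doi_c_doid_idx2
        ON public.t_e20so1_doi USING btree (i_doid ASC NULLS LAST)
        TABLESPACE some_tablespace;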
Re: Getting a error on creating a partition table index 12.2.
On 2020-Mar-05, Adrian Klaver wrote: > The OP was wondering why it worked in 12.1 and not 12.2? > > I could not see any obvious reason for that, so: > > 1) There is a not obvious reason > > 2) It did not work in 12.1 either. (2) is correct:

55469 12.1 9913=# show server_version;
 server_version
----------------
 12.1
(1 fila)

55469 12.1 9913=# create index on p (a) tablespace pg_default;
ERROR: cannot specify default tablespace for partitioned relations

-- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: postgres: archiver process failed on 000000010000020A00000073
On 2020-Apr-01, Silvio Fabi - NBS srl wrote: > WAL Archive process on my DB Postgresql 10 is not working. > the WAL Archive file 000000010000020A00000073 was accidentally deleted > and there is no backup. > Postgresql tries to run the following command: > rsync -a pg_wal/000000010000020A00000073 > barman@database:/var/lib/barman/database/incoming/000000010000020A00000073 Ouch. This is not a barman problem; accidentally losing a WAL file is synonymous with corrupting your database. Any existing backup that needs that file is now invalid. I suggest you run CHECKPOINT and then take a new backup. I think removing the 000000010000020A00000073.ready file from pg_wal/archive_status should prevent future attempts at archiving it, so that subsequent WAL files are archived. By all means, *do not* lose any more WAL files. It is a very dangerous thing to do. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
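A sketch of the unblocking steps described above (paths are illustrative; run as the postgres OS user and double-check the file name before removing anything):

    # stop retrying the lost segment; .ready markers live under archive_status
    rm $PGDATA/pg_wal/archive_status/000000010000020A00000073.ready

    # force a checkpoint, then take a fresh base backup
    psql -c 'CHECKPOINT;'
    barman backup database        # or pg_basebackup, whichever you normally use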
Re: It is possible to force periodically switch xlog?
On 2020-Apr-02, Abdul Sayeed wrote: > Hi Edson, > > You can switch by using below command below PG 10. > > Select pg_switch_xlog(); > > And there is no parameter present to switch it. IIRC archive_timeout does precisely that. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
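For the archive case specifically, a small sketch of what that looks like (the value is illustrative):

    # postgresql.conf: force a WAL segment switch (and hence archival) at least
    # every 5 minutes, even if the server is idle
    archive_timeout = 300

A manual switch can also be forced at any time with SELECT pg_switch_wal() (or pg_switch_xlog() on 9.x, as quoted above).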
Re: It is possible to force periodically switch xlog?
On 2020-Apr-02, Edson Richter wrote: > Actually, I do have a stand by server I wish to update a bit more > frequently in low traffic servers. It is better to use streaming replication for that. Using the WAL archive for that is an archaic mechanism now. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
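For reference, a minimal PG 12 standby setup for streaming replication might look like this (host and user are illustrative; adjust authentication to taste):

    # postgresql.conf on the standby
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'

    # plus an empty standby.signal file in the standby's data directory:
    #   touch $PGDATA/standby.signal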
Re: Detecting renamed columns via pgouput in logical replication ?
On 2020-Apr-27, Andreas Andreakis wrote: > Are there plans to add comprehensive schema change detection abilities via > logical replication ? either by embedding more information into pgoutput or > perhaps by embedding the schema DDLs ? There aren't any plans currently that I am aware of. There was a project to implement this a few years ago (part of which was mine) but it's not complete and I haven't heard of anyone working on completing it. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: How to use pg_waldump?
On 2020-May-01, André Hänsel wrote: > Paul Förster wrote: > > > maybe try with another WAL file or files? > > > > Works for me... > > Ok, I tried it with all the files in the pg_wal directory and it > worked with one: the first one (lexicographically/hex). The other files have not yet been written, so pg_waldump cannot read them. If you can find out what was the name of the file that it had before the recycling, you can rename (a copy of) the file to that, and then pg_waldump will be able to read it. (You can try brute-forcing the search for the correct file name). If you have an archive_command-generated archive of WALs, or pg_receivewal, those files should all be readable by pg_waldump. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
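A couple of illustrative invocations (the segment name and LSNs are placeholders, not taken from this thread):

    # dump a segment copied out of the WAL archive
    pg_waldump /path/to/archive/000000010000000A00000001

    # or let pg_waldump locate the right segments for an LSN range inside pg_wal
    pg_waldump -p $PGDATA/pg_wal -s 0/A000028 -e 0/A000130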
Re: Hash partitioning, what function is used to compute the hash?
On 2020-May-11, Michael Lewis wrote: > Afaik, hash partition doesn't have real world expected use cases just yet. I don't think I agree with this assertion. While I understand that there might be things still to do in this area (as everywhere else), it should certainly have its uses already. If you have a wish-list for hash partitioning to become usable for you, would you please list the features you wish it'd have? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
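For anyone wondering what basic usage looks like, a small sketch (table and column names are made up):

    CREATE TABLE measurements (
        sensor_id  bigint NOT NULL,
        reading    numeric,
        taken_at   timestamptz NOT NULL
    ) PARTITION BY HASH (sensor_id);

    -- one partition per (modulus, remainder) pair; rows are routed by hashing sensor_id
    CREATE TABLE measurements_p0 PARTITION OF measurements FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE measurements_p1 PARTITION OF measurements FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE measurements_p2 PARTITION OF measurements FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE measurements_p3 PARTITION OF measurements FOR VALUES WITH (MODULUS 4, REMAINDER 3);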
Re: Firing an event trigger for ADD COLUMN?
On 2020-May-11, Michel Pelletier wrote: > Is this asymmetry a bug? I realize these event trigger functions are > typically written in C, but I'd like to keep this idea contained to a > plpgsql function and cover as many reversible changes as I can. Sadly, you cannot obtain all the necessary details using only plpgsql. The only way is to compile a C extension. There's a very rough, incomplete skeleton to show how to do this in src/test/modules/test_ddl_deparse. A much more complete approach, which was not completed, was used to produce JSON output from the C structures -- see https://www.postgresql.org/message-id/20150409161419.gc4...@alvh.no-ip.org Keep in mind ALTER TABLE is a very sophisticated command; you can do things like ALTER TABLE one ADD COLUMN two INT, DROP COLUMN three, ADD COLUMN four text; (i.e. do several things in one command) so if you were to use a simplistic approach, you might end up missing things. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
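For completeness, here is roughly how far plain plpgsql gets you -- command tags and object identities, but not the individual ALTER TABLE subcommands, which is exactly the limitation described above (function and trigger names are made up; use EXECUTE PROCEDURE instead of EXECUTE FUNCTION before PG 11):

    CREATE OR REPLACE FUNCTION report_ddl() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
            RAISE NOTICE 'tag=% type=% identity=%',
                r.command_tag, r.object_type, r.object_identity;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER report_ddl_trg ON ddl_command_end
        EXECUTE FUNCTION report_ddl();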
Re: Firing an event trigger for ADD COLUMN?
On 2020-May-11, Alvaro Herrera wrote: > https://www.postgresql.org/message-id/20150409161419.gc4...@alvh.no-ip.org (Please don't expect this code to compile anymore.) -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: pg_dump of database with numerous objects
On 2020-May-31, t...@exquisiteimages.com wrote: > I am now needing to upgrade to a new version of PostgreSQL and I am running > into problems when pg_upgrade calls pg_dump. pg_dump stalled at: "pg_dump: > saving database definition" for 24 hours before I killed the process. > > My pg_class table contains 9,000,000 entries and I have 9004 schema. We've made a number of performance improvements to pg_dump so that it can dump databases that are "large" in several different dimensions, but evidently from your report it is not yet good enough when it comes to dumping millions of tables in thousands of schemas. It will probably take some profiling of pg_dump to figure out where the bottleneck is, and some careful optimization work in order to make it faster. Not a weekend job, I'm afraid :-( -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Oracle vs. PostgreSQL - a comment on Mysql
On 2020-Jun-03, Martin Mueller wrote: > On the topic of what other databases do better: I much prefer Postgres to > Mysql because it has better string functions and better as well as very > courteous error messages. But MySQL has one feature that sometimes makes me > want to return to it: it stores the most important metadata about tables in a > Mysql table that can be queried as if it were just another table. That is a > really nice feature. It makes it very easy to look for a table that you edited most > recently, including a lot of other things. > > Why doesn’t Postgres have that feature? Or is there a different and equally > easy way of getting at these things that I am just missing? Every little schema detail in Postgres is in a catalog table that you can query. See pg_class for a list of relations; pg_attribute for attributes; and so on. You can learn a lot about them just by running "psql -E" and executing \d -- look at the queries that appear above the resultset. We even have a whole section in our docs about the layout of the system catalogs. Also, there are views that make the whole thing easier. See https://www.postgresql.org/docs/devel/catalogs.html -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
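A couple of sketches of the kind of catalog queries meant here (the table name in the second one is a placeholder):

    -- ordinary tables, with schema and owner
    SELECT n.nspname AS schema, c.relname AS table_name,
           pg_get_userbyid(c.relowner) AS owner
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
    ORDER BY 1, 2;

    -- columns of one table
    SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
    FROM pg_attribute a
    WHERE a.attrelid = 'public.some_table'::regclass
      AND a.attnum > 0 AND NOT a.attisdropped
    ORDER BY a.attnum;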
Re: suggestion/question pgpass alias?
On 2020-Jun-11, Caleb Cushing wrote: > is it possible to support an alias for a pgpassfile? what I'm looking to do > is > > psql --alias company -h localhost > > and not type anything else and get a shell. if this is not possible could > this be added? I think what you want is a "service" file. See https://www.postgresql.org/docs/current/libpq-pgservice.html -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
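A minimal example of such a service file (connection parameters are illustrative):

    # ~/.pg_service.conf
    [company]
    host=localhost
    port=5432
    dbname=company_db
    user=caleb

    # then:
    #   psql service=company
    # or set PGSERVICE=company and just run psql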
Re: suggestion: psql configs in .config
On 2020-Jun-11, Tom Lane wrote: > I'm finding it hard to see a compelling reason why that would be an > improvement. It would make psql randomly different from 95% of the > other programs that have similar config files. Or, if everybody > somehow got on board with "let's put config files in ~/.config/", > then there'd be clutter in that directory ... and we're back to > square one. Actually, they do: this has been kind-of standardized by freedesktop as XDG_CONFIG_HOME -- see https://specifications.freedesktop.org/basedir-spec/latest/ar01s03.html My own ~/.config has 84 subdirs and files. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PostgreSQL 12 - ERROR: could not rename file "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to "pg_logical/snapshots/1A-7C00D890.snap": No space left on device
On 2020-Jun-30, FOUTE K. Jaurès wrote: > Hi everyone, > > I am use a native logical replication in PostgreSQL 12 on Ubuntu 16.04 and > all is working fine until i faced this error below. > > ERROR: could not rename file > "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to > "pg_logical/snapshots/1A-7C00D890.snap": No space left on device What are your fsync and wal_sync_method settings? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: invalid non-zero objectSubId for object class
On 2020-Jul-09, Michel Pelletier wrote: > Hi Tom, thanks for getting back so quick: > > I don't seem to have either: > > dev=> select * from pg_depend where classid = 297108 or refclassid = 297108; > classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype > -+---+--++--+-+- > (0 rows) Maybe try looking in pg_shdepend? Or pg_description? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
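That is, something along these lines (reusing the OID from the query quoted above):

    SELECT * FROM pg_shdepend
    WHERE classid = 297108 OR refclassid = 297108;

    SELECT * FROM pg_description
    WHERE classoid = 297108 OR objoid = 297108;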
Re: invalid non-zero objectSubId for object class
On 2020-Jul-09, Michel Pelletier wrote: > I restored a snapshot and I can drop the tables there, so we'll likely > proceed to swap the replicas over tomorrow. I have this corrupted > instance i can continue to debug on if necessary. There seem to be some > other issues now that we're investigating, like a max(timestamp) query on > the old instance is doing a seqscan on all the partitions instead of using > an index that's there, the new instance from the snapshot correctly uses > the index, so I suspect further corruption. This being RDS makes it very difficult to debug. For example I would suggest a debugger to find the problem, but you don't have access to that. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: bad JIT decision
On 2020-Jul-24, Andres Freund wrote: > I think the issue is more that we need to take into account that the > overhead of JITing scales ~linearly with the number of JITed > expressions. And that's not done right now. I've had a patch somewhere > that had a prototype implementation of changing the costing to be > #expressions * some_cost, and I think that's a lot more accurate. I don't quite understand why is it that a table with 1000 partitions means that JIT compiles the thing 1000 times. Sure, it is possible that some partitions have a different column layout, but it seems an easy bet that most cases are going to have identical column layout, and so tuple deforming can be shared. (I'm less sure about sharing a compile of an expression, since the varno would vary. But presumably there's a way to take the varno as an input value for the compiled expr too?) Now I don't actually know how this works so please correct if I misunderstand it. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: bad JIT decision
On 2020-Jul-27, Scott Ribe wrote: > > On Jul 27, 2020, at 4:00 PM, Alvaro Herrera > > wrote: > > > > I don't quite understand why is it that a table with 1000 partitions > > means that JIT compiles the thing 1000 times. Sure, it is possible that > > some partitions have a different column layout, but it seems an easy bet > > that most cases are going to have identical column layout, and so tuple > > deforming can be shared. (I'm less sure about sharing a compile of an > > expression, since the varno would vary. But presumably there's a way to > > take the varno as an input value for the compiled expr too?) Now I > > don't actually know how this works so please correct if I misunderstand > > it. > > I'm guessing it's because of inlining. You could optimize a function > that takes parameters, no problem. But what's happening is inlining, > with parameters, then optimizing. Are you saying that if you crank jit_inline_above_cost beyond this query's total cost, the problem goes away? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
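As a session-level experiment, something like this should tell whether inlining is the culprit (both GUCs exist since PG 11):

    SET jit_inline_above_cost = -1;   -- keep JIT but disable inlining entirely
    -- EXPLAIN (ANALYZE) <the query>;

    SET jit = off;                    -- or take JIT out of the picture altogether
    -- EXPLAIN (ANALYZE) <the query>;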
Re: 12.3 replicas falling over during WAL redo
On 2020-Aug-01, Ben Chobot wrote: > We have a few hundred postgres servers in AWS EC2, all of which do streaming > replication to at least two replicas. As we've transitioned our fleet > from 9.5 to 12.3, we've noticed an alarming increase in the frequency of a > streaming replica dying during replay. Postgres will log something like:
>
> 2020-07-31T16:55:22.602488+00:00 hostA postgres[31875]: [19137-1] db=,user= LOG: restartpoint starting: time
> 2020-07-31T16:55:24.637150+00:00 hostA postgres[24076]: [15754-1] db=,user= FATAL: incorrect index offsets supplied
> 2020-07-31T16:55:24.637261+00:00 hostA postgres[24076]: [15754-2] db=,user= CONTEXT: WAL redo at BCC/CB7AF8B0 for Btree/VACUUM: lastBlockVacuumed 1720
> 2020-07-31T16:55:24.642877+00:00 hostA postgres[24074]: [8-1] db=,user= LOG: startup process (PID 24076) exited with exit code 1

I've never seen this one. Can you find out which index is being modified by those LSNs -- is it always the same index? Can you have a look at nearby WAL records that touch the same page of the same index in each case? One possibility is that the storage forgot a previous write. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: 12.3 replicas falling over during WAL redo
On 2020-Aug-03, Ben Chobot wrote: > rmgr: Btree len (rec/tot): 72/ 72, tx: 76396065, lsn: > A0A/AC4204A0, prev A0A/AC420450, desc: INSERT_LEAF off 48, blkref #0: rel > 16605/16613/60529051 blk 6501 > > So then I did: > > /usr/lib/postgresql/12/bin/pg_waldump -p /var/lib/postgresql/12/main/pg_wal/ > 00010A0A00AB 00010A0A00AD | grep > 16605/16613/60529051 Yep. Looking at the ones in block 6501, > rmgr: Btree len (rec/tot): 72/ 72, tx: 76393394, lsn: > A0A/AB2C43D0, prev A0A/AB2C4378, desc: INSERT_LEAF off 41, blkref #0: rel > 16605/16613/60529051 blk 6501 > rmgr: Btree len (rec/tot): 72/ 72, tx: 76396065, lsn: > A0A/AC4204A0, prev A0A/AC420450, desc: INSERT_LEAF off 48, blkref #0: rel > 16605/16613/60529051 blk 6501 My question was whether the block has received the update that added the item in offset 41; that is, is the LSN in the crashed copy of the page equal to A0A/AB2C43D0? If it's an older value, then the write above was lost for some reason. > pg_waldump: fatal: error in WAL record at A0A/AC5411B0: invalid resource > manager ID 110 at A0A/AC5411E0 > > ...and I have no idea what I'm looking at. I assume/hope the error at the > end is due to the db shutting down, and nothing to be particularly worried > about? Yeah. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: 12.3 replicas falling over during WAL redo
On 2020-Aug-03, Ben Chobot wrote: > Alvaro Herrera wrote on 8/3/20 12:34 PM: > > On 2020-Aug-03, Ben Chobot wrote: > > > > Yep. Looking at the ones in block 6501, > > > > > rmgr: Btree len (rec/tot): 72/ 72, tx: 76393394, lsn: > > > A0A/AB2C43D0, prev A0A/AB2C4378, desc: INSERT_LEAF off 41, blkref #0: rel > > > 16605/16613/60529051 blk 6501 > > > rmgr: Btree len (rec/tot): 72/ 72, tx: 76396065, lsn: > > > A0A/AC4204A0, prev A0A/AC420450, desc: INSERT_LEAF off 48, blkref #0: rel > > > 16605/16613/60529051 blk 6501 > > My question was whether the block has received the update that added the > > item in offset 41; that is, is the LSN in the crashed copy of the page > > equal to A0A/AB2C43D0? If it's an older value, then the write above was > > lost for some reason. > > How do I tell? You can use pageinspect's page_header() function to obtain the page's LSN. You can use dd to obtain the page from the file, dd if=16605/16613/60529051 bs=8192 count=1 seek=6501 of=/tmp/page.6501 then put that binary file in a bytea column, perhaps like create table page (raw bytea); insert into page select pg_read_binary_file('/tmp/page.6501'); and with that you can run page_header: create extension pageinspect; select h.* from page, page_header(raw) h; -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: 12.3 replicas falling over during WAL redo
On 2020-Aug-03, Ben Chobot wrote: > Alvaro Herrera wrote on 8/3/20 2:34 PM: > > On 2020-Aug-03, Ben Chobot wrote: > > dd if=16605/16613/60529051 bs=8192 count=1 seek=6501 of=/tmp/page.6501 > If I use skip instead of seek Argh, yes, I did correct that in my test and forgot to copy and paste.

>      lsn      | checksum | flags | lower | upper | special | pagesize | version | prune_xid
> --------------+----------+-------+-------+-------+---------+----------+---------+-----------
>  A0A/99BA11F8 |     -215 |     0 |   180 |  7240 |    8176 |     8192 |       4 |         0
>
> As I understand what we're looking at, this means the WAL stream was > assuming this page was last touched by A0A/AB2C43D0, but the page itself > thinks it was last touched by A0A/99BA11F8, which means at least one write > to the page is missing?

Yeah, that's exactly what we're seeing. Somehow an older page version was resurrected. Of course, this should never happen. So my theory has been proved. What now? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: 12.3 replicas falling over during WAL redo
On 2020-Aug-03, Alvaro Herrera wrote:

> >      lsn      | checksum | flags | lower | upper | special | pagesize | version | prune_xid
> > --------------+----------+-------+-------+-------+---------+----------+---------+-----------
> >  A0A/99BA11F8 |     -215 |     0 |   180 |  7240 |    8176 |     8192 |       4 |         0
> >
> > As I understand what we're looking at, this means the WAL stream was > > assuming this page was last touched by A0A/AB2C43D0, but the page itself > > thinks it was last touched by A0A/99BA11F8, which means at least one write > > to the page is missing?
>
> Yeah, that's exactly what we're seeing. Somehow an older page version > was resurrected. Of course, this should never happen.

... although, the block should have been in shared buffers, and it is there that the previous WAL record would have updated -- not necessarily flushed to disk. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Server goes to Recovery Mode when run a SQL
On 2020-Aug-14, PegoraroF10 wrote: > CREATE FUNCTION public.fntextonumero(ftext text, finteger public.i32) How is public.i32 defined? -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Reconstructing transaction content after the fact
On 2020-Sep-02, Tom Lane wrote: > Lionel Bouton writes: > > Are these t_xmin values ever cleaned up (by VACUUM or another mechanism) > > ? If positive is there a way to configure the approximate time during > > which these values can be recovered ? > > See VACUUM FREEZE. You couldn't hide connections immediately after > insertion, but if the idea is to sanitize every so often, it'd help. Starting with 9.4 which introduced the use of HEAP_XMIN_FROZEN combination in infomask to replace rewriting the xmin value proper, the Xids will be preserved by freezing, so that won't help. One option to hide the xids might be to recreate the tables every once in a while, with something like

BEGIN;
LOCK TABLE votes;
DROP TABLE IF EXISTS votes_copy;
CREATE TABLE votes_copy AS SELECT * FROM votes;
DROP TABLE votes;
ALTER TABLE votes_copy RENAME TO votes;
-- recreate indexes, if any?
COMMIT;

which will make all rows have the same Xmin. Since the voting process involves a human act and the tables are not expected to be enormous, it might not be totally out of the question to do this after every vote, or in the worst case, once every minute or so. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PKEY getting corrupted
Danny, On 2020-Sep-10, Abraham, Danny wrote: > We have seen several times a situation where a PKEY is compromised and > duplicate values are created within a table. Yeah, this is not unheard of. > This has happened so far on PG 928 on Linux and Windows, and also on > PG955 on AIX. You do realize that 9.2.8 is not only missing the many bugfixes that happened until 9.2.24 was released in November 2017, but also any bugfixes that were done after that branch was abandoned? 9.5.5 sounds more interesting as a target, but it's still missing all updates until 9.5.23 released last month. -- Álvaro Herrerahttps://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Writing WAL files
Hello Robert On 2020-Oct-04, Robert Inder wrote: > One of the things I like about the old PGSQL 9 setup is that it generates > and ships a WAL file every few minutes, even if nothing has happened in the > database. > I find it re-assuring to be able to see WAL files arriving and being > processed without problem even when the live system was idle. > > But I cannot get PGSQL 12 to do this. It only writes (and thus ships) WAL > files when something happens in the database. > If the database is idle, it simply does not write any WAL files. This is on purpose; archiving WAL files that contain nothing is pure wastage of good electrons. I suggest that in PG12 you can monitor the "lag" of a standby server more directly by looking at columns write_lag, flush_lag, replay_lag in the pg_stat_replication view. (You'll need to change your configuration so that it uses streaming replication instead of pg_standby and rsync, but that's far more convenient so it's a good change anyway.)
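For example, a quick check run on the primary, using the view mentioned above:

    SELECT application_name, client_addr, state,
           write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;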
Re: UUID with variable length
On 2020-Oct-15, Dirk Krautschick wrote: > Hi, > > I have here a situation with the usage of UUID. Here the database user > allows UUIDs with less than 16 byte lengths (please don't ask :-) ). > > Of course there are some technical ways to do the filling of the not > used bytes but I hope there is a better solution. This UUID is used as > primary key and for indexing. How much shorter than 16 bytes are you expecting your UUIDs to be? If they're not short enough, you'll still have to store a lot of padding space, and considering that you'll need a length indicator if you use variable length, it does not really sound like you'll save much actual space. And you'll definitely be getting a slower datatype, since doing operations will become more complex. If this were me, I would see about zeroing out the unused bytes and not waste a lot of developer time on this.
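If padding is the chosen route, a hypothetical sketch of right-padding a short hex identifier out to a full uuid (whether left- or right-padding is appropriate depends entirely on how the shorter IDs are generated):

    -- a 12-byte identifier (24 hex digits) padded with zero bytes to 16 bytes
    SELECT rpad('00112233445566778899aabb', 32, '0')::uuid;
    -- => 00112233-4455-6677-8899-aabb00000000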
Re: Reference-Partitioned Tables
On 2020-Nov-06, Сергей _ wrote: > https://docs.oracle.com/database/121/VLDBG/GUID-00923EB3-05F6-41F7-8437-E42FC9BD9571.htm#VLDBG1093 > > There is a widespread model when data related to properties of a superclass > is stored in a base table, and data related to subclasses in a detail table. > Or > when the main entity has volatile additional parameters. A very simplified > example in the attached image. Sounds easier to achieve by using a JSONB column that holds all the contract details. > The most successful choice of partitioning table Contracts is sectioning > by list with key column ContractTypeID. It would be nice if the table > ContractDetails is partitioned automatically like the parent table > Contracts. Then we don't have to add a crutch column ContractTypeID to > the table ContractDetails and partition manually in sync with > Contracts. This field also consumes disk space, since the table of details > is usually large and there can be more than one partitioning key. Hmm, so you want to partition a table based on values appearing in another table. I wouldn't hold my breath waiting for this.
Re: New "function tables" in V13 documentation
On 2020-Nov-08, Adrian Klaver wrote: > On 11/8/20 1:57 PM, Thomas Kellerer wrote: > > In case someone is interested: there is a little discussion going on on > > Reddit whether the new format of presenting functions in V13 is a step > > backwards: > > > > > > https://www.reddit.com/r/PostgreSQL/comments/jpi0rp/does_anyone_else_feel_like_the_v13_docs_are_a/ > > Yeah, I would agree with the mobile first design comments. Then again that > plague is hitting most sites these days. My 2 cents is it is a step > backwards. You can cover more ground quickly and digest it faster in the old > format. The person who made that comment retracted later. If you have suggestion on how to improve the new format, I'm sure we can discuss that. It seems pretty clear to me that we're not going back to the old format.
Re: New "function tables" in V13 documentation
On 2020-Nov-09, Adrian Klaver wrote: > > If you have suggestion on how to improve the new format, I'm sure we can > > discuss that. It seems pretty clear to me that we're not going back to > > the old format. > > Improve it by going back to old format. Not sure why that is not open to > discussion? Because the old format had problems.
Re: psql backward compatibility
On 2020-Nov-18, Stephen Haddock wrote: > Hello, > > When upgrading an older version of postgres, version 8.4 for example, to a > newer version such as 9.6, does the data have to be migrated immediately? As others have said: yes. > It looks like the recommended method is to dump the data, upgrade, > initialize a new cluster, and then restore the dumped data into the newer > version. Actually, you can also use pg_upgrade, which might be more convenient, particularly if your database is large.
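A rough sketch of the pg_upgrade invocation for that kind of jump (paths are illustrative; both clusters must be stopped, and run --check first):

    pg_upgrade \
      --old-bindir  /usr/lib/postgresql/8.4/bin \
      --new-bindir  /usr/lib/postgresql/9.6/bin \
      --old-datadir /var/lib/postgresql/8.4/main \
      --new-datadir /var/lib/postgresql/9.6/main \
      --check

    # drop --check to perform the actual upgrade; --link avoids copying data files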
Re: Potential BRIN Index Corruption
On 2020-Nov-26, Huan Ruan wrote: > Hi All > > We cannot work out a reproducible case but we have a copy of the offending > database. I was hoping to know I think the way to debug this would be to see what WAL records have been emitted for the index, using pageinspect to find the problem index tuple. Use 'select ctid from large_table_with_623m_records where ...' to pinpoint the unindexed tuple's page number; see when (in LSN) that tuple was written; inspect WAL surroundings looking for updates (or lack thereof) for the BRIN index. Use pageinspect to examine raw brin data.
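A sketch of those pageinspect steps (the predicate, index name and page number are placeholders):

    CREATE EXTENSION IF NOT EXISTS pageinspect;

    -- 1. locate the heap page of a row the index apparently misses
    SELECT ctid FROM large_table_with_623m_records WHERE id = 12345;
    -- suppose this returns (987654,3), i.e. heap block 987654

    -- 2. inspect the index: page 0 is the BRIN metapage, data pages follow the revmap
    SELECT * FROM brin_metapage_info(get_raw_page('the_brin_index', 0));
    SELECT itemoffset, blknum, attnum, allnulls, hasnulls, value
    FROM brin_page_items(get_raw_page('the_brin_index', 2), 'the_brin_index')
    ORDER BY blknum;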
Re: Deleting takes days, should I add some index?
On 2020-Nov-27, Alexander Farber wrote: > Referenced by: > TABLE "words_chat" CONSTRAINT "words_chat_gid_fkey" FOREIGN KEY (gid) > REFERENCES words_games(gid) ON DELETE CASCADE > TABLE "words_moves" CONSTRAINT "words_moves_gid_fkey" FOREIGN KEY (gid) > REFERENCES words_games(gid) ON DELETE CASCADE > TABLE "words_scores" CONSTRAINT "words_scores_gid_fkey" FOREIGN KEY > (gid) REFERENCES words_games(gid) ON DELETE CASCADE Make sure you have indexes on the gid columns of these tables. Delete needs to scan them in order to find the rows that are cascaded to. > So I ctrl-c (surprisingly not a single record was deleted; I was expecting > at least some to be gone) Ctrl-C aborts the transaction, so even though the rows are marked deleted, they are so by an aborted transaction. Therefore they're alive.
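Concretely, something like this (index names are left for Postgres to pick; CONCURRENTLY avoids blocking writers but cannot run inside a transaction block):

    CREATE INDEX CONCURRENTLY ON words_chat (gid);
    CREATE INDEX CONCURRENTLY ON words_moves (gid);
    CREATE INDEX CONCURRENTLY ON words_scores (gid);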
Re: Do we need a way to moderate mailing lists?
On 2021-Jan-16, Hemil Ruparel wrote: > Okay. I will not reply to them. Enough mental cycles wasted One way you could help, is by learning what top-posting is, learning not to do it, and teaching others the same. Same with not quoting entire messages on reply. -- Álvaro Herrera Valdivia, Chile