Re: Specific query taking time to process
> There is a specific search query I am running to get a list of documents and
> their metadata from several tables in the DB.
> We are running Postgres 9.6.9 on Amazon RDS (db.m5.4xlarge instance).
>
> Our current DB consists of 500GB of data and indexes. Most of the rows are
> in one table, which consists of 454,078,915 rows.
>
> With a fresh DB from a restore of the data, without any indexes, the search
> query performs relatively quickly, most of the time in less than a second.
>
> But after 3 weeks of use of the DB it suddenly started to slow down, only
> for this particular query, and it takes 20+ seconds to respond. If I
> restore the DB again then it continues to work fine, and the symptom pops
> up again after about 3 weeks.

You haven't been quite clear on the situation and your use case, but assuming this table has 454 million rows and experiences updates/deletes, then this sounds like you may be having problems with autovacuum. Have you customized parameters to ensure it is running more frequently than the default?

How are you doing those data restores? Perhaps that process is cleaning up the accumulated bloat, so you can run fine again for a while. Check pg_stat_user_tables for the last (auto)vacuum that ran, assuming you didn't just restore again and are expecting the issue to occur again soon.
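To make the suggestion above concrete, a quick sketch of both checks — when (auto)vacuum last ran, and a per-table tuning override to make it run more often. The table name `documents` is a placeholder, and the threshold values are illustrative, not a recommendation:

```sql
-- When did (auto)vacuum last touch the table, and how many dead tuples remain?
SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'documents';   -- placeholder table name

-- With the default autovacuum_vacuum_scale_factor of 0.2, a 454M-row table
-- only gets vacuumed after roughly 90M dead tuples accumulate.  A per-table
-- override makes autovacuum trigger much earlier (values are illustrative):
ALTER TABLE documents SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 10000
);
```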
Re: Logical replication performance
On Fri, Nov 29, 2019 at 11:06 AM Florian Philippon <[email protected]> wrote:

> We tried another solution: we loaded a minimal schema (without indexes and
> constraints) on the subscriber and created the subscription. The initial
> copy phase was way faster (a few hours). Then we created indexes and
> constraints. Is this a suitable solution for production?

This is probably not suitable for production. Once the COPY is finished, it still has to replicate the row-by-row changes to the table which occurred since the starting COPY snapshot. UPDATEs and DELETEs will probably fail due to the lack of indexes on the “replica identity” columns. This failure will make the entire transaction, including the COPY, roll back to the beginning. So there will be no point at which you can build the missing indexes without first losing all the work that was done.

If the master was quiescent (at least in regard to UPDATEs and DELETEs), then there will be no row-by-row changes to apply between the start of the COPY and the start of transactional replication. In that case, the COPY will have committed before the system discovers the problem with the “replica identity”, giving you an opportunity to go build the index without losing all of the work.

> Will the logical replication flow be buffered by the replication slots
> during index creation and get in sync afterwards, or will it conflict due
> to locking issues?

It can't buffer in the middle of the transaction which includes the initial COPY.

Cheers,

Jeff
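For context on the “replica identity” mentioned above, a minimal sketch of how a subscriber-side table can be given the unique index the apply worker needs for UPDATE/DELETE changes. The table and index names here are hypothetical:

```sql
-- Hypothetical table/index names for illustration.
-- On the subscriber, build the unique index that UPDATE/DELETE apply needs:
CREATE UNIQUE INDEX documents_id_idx ON documents (id);
ALTER TABLE documents
    ADD CONSTRAINT documents_pkey PRIMARY KEY USING INDEX documents_id_idx;

-- The replica identity of a table can be inspected on either side:
SELECT relname, relreplident
FROM pg_class
WHERE relname = 'documents';
-- relreplident: 'd' = default (primary key), 'i' = index,
--               'f' = full row, 'n' = nothing
```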
RE: Legal disclaimers on emails to this group
I apologize for the legalese; as others have suggested, it's corporate IT policy and I have no control over it. I certainly intended no offense to the community here. I will use my personal email for future inquiries on this mailing list.

Thanks,
Mike

From: Craig James [mailto:[email protected]]
Sent: Friday, December 06, 2019 1:42 PM
To: Mike Schanne
Cc: [email protected]
Subject: Legal disclaimers on emails to this group

(I've changed the original subject, "autovacuum locking question", of the sender's email so as not to hijack that thread.)

On Thu, Dec 5, 2019 at 2:26 PM Mike Schanne <[email protected]> wrote:

> Hi, I am investigating a performance problem...
> ...
> This email is non-binding, is subject to contract, and neither Kulicke and
> Soffa Industries, Inc. nor its subsidiaries (each and collectively “K&S”)
> shall have any obligation to you to consummate the transactions herein or
> to enter into any agreement, other than in accordance with the terms and
> conditions of a definitive agreement if and when negotiated, finalized and
> executed between the parties. This email and all its contents are protected
> by International and United States copyright laws. Any reproduction or use
> of all or any part of this email without the express written consent of
> K&S is prohibited.

Sorry to be off topic, but this bugs me. Language is important. This isn't directed at you specifically, but I see these disclaimers all the time. How can you post to a public newsgroup that automatically reproduces your email to thousands of subscribers, and additionally publishes it on publicly accessible archives, in direct conflict with your company's policy appended to your email? And why on Earth do your company's lawyers think this sort of disclaimer is helpful or even legally useful? Not to mention, do they realize it's vaguely offensive to every customer and colleague who receives it?

Craig
RE: unexpected result for wastedbytes query after vacuum full
Yes, the additional bitmap could certainly explain the increase.

Thanks,
Mike

-----Original Message-----
From: Justin Pryzby [mailto:[email protected]]
Sent: Friday, December 06, 2019 6:29 PM
To: Mike Schanne
Cc: [email protected]
Subject: Re: unexpected result for wastedbytes query after vacuum full

On Fri, Dec 06, 2019 at 05:18:20PM +, Mike Schanne wrote:
> Hi all,
>
> This question is somewhat related to my previous question:
> https://www.postgresql.org/message-id/0871fcf35ceb4caa8a2204ca9c38e330%40USEPRDEX1.corp.kns.com
>
> I was attempting to measure the benefit of doing a VACUUM FULL on my
> database. I was using the query found here:
> https://wiki.postgresql.org/wiki/Show_database_bloat
>
> However, I got an unexpected result in that the "wastedbytes" value
> actually increased for some tables after doing the vacuum.
> I was wondering if the fact that we use a json column could be interfering
> with the wastedbytes calculation. Can anyone explain how wastedbytes could
> increase from a vacuum?

Is it due to dropped columns, like Tom explained here?
https://www.postgresql.org/message-id/18375.1520723971%40sss.pgh.pa.us
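Two ways to follow up on the dropped-columns theory: check the catalog for columns still carried as dropped in the on-disk tuples, and compare the wiki query's estimate against an actual measurement from the pgstattuple extension. A sketch, with `documents` as a placeholder table name:

```sql
-- Are there dropped columns still recorded for the table?  Dropped columns
-- force a null bitmap onto every tuple, which the estimate-based bloat
-- query does not account for.
SELECT attrelid::regclass AS table_name, attname, attnum
FROM pg_attribute
WHERE attrelid = 'documents'::regclass   -- placeholder table name
  AND attisdropped;

-- pgstattuple scans the table and reports measured (not estimated) free
-- space and dead-tuple counts, so it is not fooled by dropped columns:
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('documents');
```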
