Re: Postgres calendar?
+1.

On Tue, Oct 4, 2022 at 5:02 PM Bruce Momjian wrote:
> Would people be interested in subscribing to a Postgres calendar that
> includes dates for minor releases, final minor release dates for major
> versions, commit fests, and even Postgres events? For example, it could
> include information from:
>
> https://www.postgresql.org/developer/roadmap/
> https://www.postgresql.org/support/versioning/
> https://commitfest.postgresql.org/
> https://www.postgresql.org/about/events/
>
> We could even add information about beta, release candidate, and final
> major releases, though the final release dates are usually not public.
>
> This could be done in Google Calendar, with an exported ICS file, or via
> a dedicated ICS file. I could even automate it by scraping our website.
>
> --
> Bruce Momjian  https://momjian.us
> EDB            https://enterprisedb.com
>
> Indecision is a decision. Inaction is an action. Mark Batterson
Off-topic? How to extract database statements from JPA?
I apologize if this is off-topic, but I've become exceedingly frustrated and need help. The assignment is to evaluate the SQL generated by an application built with the Java Spring Persistence API (JPA) and, if appropriate, offer suggestions to improve database performance. The problem is that I am not a Java guy. I can usually read code and work out what it is trying to do, but here I am at a loss. Where does JPA hide the SQL code? I do not really expect a definitive, explicit answer, but if anybody could point me to documentation or a working aid that lays out where JPA keeps its DDL and DML, I would be truly grateful.

Thanks in advance.

Gus
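For what it's worth, one way to see the SQL without touching the Java side at all is to let Postgres itself log or aggregate whatever the application sends. A minimal sketch, assuming you can change server settings (on the Java side, the real Hibernate/Spring Boot properties hibernate.show_sql and spring.jpa.show-sql will also echo statements to the application log):

    -- Log every statement the application issues, or only the slow ones.
    ALTER SYSTEM SET log_statement = 'all';             -- or 'mod' for DML only
    ALTER SYSTEM SET log_min_duration_statement = 500;  -- also log anything slower than 500 ms
    SELECT pg_reload_conf();

    -- Or aggregate them with pg_stat_statements (needs shared_preload_libraries
    -- = 'pg_stat_statements' and a restart first; column names as of PG 13+).
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    SELECT calls, round(mean_exec_time) AS avg_ms, query
      FROM pg_stat_statements
     ORDER BY total_exec_time DESC
     LIMIT 20;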
Re: Effects of dropping a large table
Isn’t this a perfect opportunity to use the TRUNCATE command to quickly remove the data? And follow up by deleting the now empty tables?

Regards,
Gus

Sent from my iPhone

On Jul 19, 2023, at 7:33 PM, Rob Sargent wrote:
> On 7/19/23 17:15, David Rowley wrote:
>> On Wed, 19 Jul 2023 at 07:41, Rob Sargent wrote:
>>> You might consider deleting portions of the table in separate
>>> (consecutive) batches (maybe 5% per delete). And then truncate table is
>>> not logged so that might be an alternative.
>>
>> Can you explain why this would be a useful thing to do? It sounds to me
>> like it would just create a load of needless WAL from the deletes and the
>> vacuum that cleans up the dead rows, each of which is more likely to cause
>> lag problems on the replica servers, which the OP is trying to avoid.
>>
>> David
>
> No, you're right. I was remembering problems with _deleting_ essentially
> all of a large table (with limited resources). The drop might not have the
> same problem. But aren't they logged/transactional and then in the WALs
> anyway?
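For reference, a rough sketch of the batching Rob describes, with a hypothetical table; as David points out, if the goal is to get rid of the whole table this only generates needless WAL and a plain DROP TABLE is the better tool:

    -- Delete in slices to keep each transaction's WAL and lock footprint small.
    DELETE FROM big_table
     WHERE ctid IN (SELECT ctid FROM big_table LIMIT 100000);
    VACUUM big_table;   -- reclaim the dead rows between batches
    -- repeat until DELETE reports 0 rows, then: DROP TABLE big_table;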
Re: Effects of dropping a large table
Ah! TRUNCATE doesn't generate per-row WAL the way a bulk DELETE does. From the documentation:

"TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables."

https://www.postgresql.org/docs/14/sql-truncate.html

Regards,
Gus

On Sun, Jul 23, 2023 at 5:51 AM Peter J. Holzer wrote:
> On 2023-07-22 16:37:39 -0400, Gus Spier wrote:
> > Isn’t this a perfect opportunity to use the TRUNCATE command to
> > quickly remove the data? And follow up by deleting the now empty
> > tables?
>
> What's the advantage of first truncating and then deleting a table over
> just deleting it?
>
>         hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"
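A minimal sketch of the sequence being suggested, with a hypothetical table name. Both statements are transactional and WAL-logged, but they log the file-level operation rather than every row, which is why they are fast; and, as Peter's question above implies, if the table is going to be dropped anyway the TRUNCATE step buys nothing extra:

    BEGIN;
    TRUNCATE TABLE big_table;   -- swaps in an empty heap; no per-row WAL
    DROP TABLE big_table;       -- removes the (now empty) table entirely
    COMMIT;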
AWS RDS Postgres and the DBA: Which/how many aws permissions/access do we really need?
So, I'm looking for advice here. Can anyone recommend a list of useful or required AWS RDS permissions for a proactive DBA? We are taking custody of a set of databases that will need sane backup and recovery plans, table partitioning, not-yet-discovered Postgres extensions, and we don't yet know what else. RDSADMIN is out of the question, even though it would be the one-stop shop for managing the clusters and databases. But, unless I have to, I do not care to administer by trial and error (tripping over each mine in the field). Any advice would be appreciated.

Thanks,
Gus
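Not an authoritative list, but as a starting point, here is the shape of an IAM policy covering what a hands-on DBA usually needs: describe instances and clusters, take snapshots, read logs and metrics, and tune parameter groups. The action names below are believed correct but should be checked against the current AWS documentation, and Resource should be narrowed to your own ARNs:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DbaBaseline",
        "Effect": "Allow",
        "Action": [
          "rds:Describe*",
          "rds:ListTagsForResource",
          "rds:CreateDBSnapshot",
          "rds:CreateDBClusterSnapshot",
          "rds:DownloadDBLogFilePortion",
          "rds:ModifyDBParameterGroup",
          "rds:ModifyDBClusterParameterGroup",
          "cloudwatch:GetMetricData",
          "logs:GetLogEvents",
          "pi:GetResourceMetrics"
        ],
        "Resource": "*"
      }]
    }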
page is not marked all-visible but visibility map bit is set in relation "pg_statistic"
AWS RDS Aurora PostgreSQL, version 14.4. The error log shows:

    page is not marked all-visible but visibility map bit is set in relation "pg_statistic"

To me, that sounds ominous. But there does not appear to be any great performance hit. The applications are chugging along nicely. The end-users have not yet gathered at my door, waving torches and pitchforks. What is the correct course of action in this case?

Regards,
Gus Spier
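For what it's worth, the usual first response to that message is to have VACUUM recheck the pages and repair the visibility map rather than ignore it. A sketch, assuming the pg_visibility contrib module is available on your Aurora version and your role is permitted to vacuum the catalog:

    -- Inspect which pages disagree with the visibility map (contrib module).
    CREATE EXTENSION IF NOT EXISTS pg_visibility;
    SELECT * FROM pg_check_visible('pg_catalog.pg_statistic');

    -- Recheck every page and rebuild the map, bypassing the usual
    -- "skip all-visible pages" optimization.
    VACUUM (DISABLE_PAGE_SKIPPING, VERBOSE) pg_catalog.pg_statistic;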
Foreign Data Wrappers
I find myself in new territory: Foreign Data Wrappers (FDW). I stumble closer to success through the documentation, YouTube videos, and various Google resources. But I've come across a concept that intrigues me very deeply. If I understand the concepts correctly, FDW not only makes other databases available; it also offers access to .csv files, plain text, or just about anything that can be bullied into some kind of query-able order. Has anyone ever tried to connect to Redis or ElastiCache? If so, how did it work out?

Regards,
Gus

--
Gus
gus.sp...@gmail.com
540-454-3074

“Characteropathic individuals adopt ideologies created by doctrinaire, often schizoidal people, recast them into an active propaganda form, and disseminate it with their characteristic pathological egotism and paranoid intolerance for any philosophies which may differ from their own.” (Political Ponerology, Andrew Lobaczewski, 1984)
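To illustrate the flat-file side with the wrapper that ships in contrib (the file name and columns here are invented):

    CREATE EXTENSION IF NOT EXISTS file_fdw;

    CREATE SERVER flat_files FOREIGN DATA WRAPPER file_fdw;

    -- Hypothetical CSV of events; the file must be readable by the server process.
    CREATE FOREIGN TABLE event_log (
        event_time  timestamptz,
        level       text,
        message     text
    ) SERVER flat_files
      OPTIONS (filename '/var/lib/pgsql/event_log.csv', format 'csv', header 'true');

    SELECT level, count(*) FROM event_log GROUP BY level;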
Re: Foreign Data Wrappers
Thanks for the pointers! I'll resume my task in the morning. If I get anywhere, I will post the outcome.

R,
Gus

On Fri, Sep 6, 2024 at 9:35 PM Tom Lane wrote:
> Gus Spier writes:
> > If I understand the concepts correctly, FDW not only makes other databases
> > available, FDW also offers access to .csv files, plain text, or just about
> > anything that can be bullied into some kind of query-able order. Has anyone
> > ever tried to connect to redis or elasticache? If so, how did it work out?
>
> Looks like it's been done:
>
> https://wiki.postgresql.org/wiki/Foreign_data_wrappers
>
> (No warranty expressed or implied on the quality of these
> particular FDWs.)
>
> regards, tom lane
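In case it saves some fumbling in the morning: wiring up the redis_fdw listed on that wiki page generally looks like the sketch below. The option names are recalled from its README rather than verified, so treat them as assumptions and check the wrapper you actually install:

    CREATE EXTENSION redis_fdw;

    CREATE SERVER redis_server
        FOREIGN DATA WRAPPER redis_fdw
        OPTIONS (address '127.0.0.1', port '6379');

    CREATE USER MAPPING FOR PUBLIC
        SERVER redis_server
        OPTIONS (password 'secret');

    -- Exposes Redis database 0 as simple key/value rows.
    CREATE FOREIGN TABLE redis_db0 (key text, val text)
        SERVER redis_server
        OPTIONS (database '0');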
Re: Can we get rid of repeated queries from pg_dump?
You guys are brilliant!

Regards,
Gus

On Sat, Aug 28, 2021 at 6:26 PM Tom Lane wrote:
> Here is a second patch, quite independent of the first one, that
> gets rid of some other repetitive queries. On the regression database,
> the number of queries needed to do "pg_dump -s regression" drops from
> 3260 to 2589, and on my machine it takes 1.8 sec instead of 2.1 sec.
>
> What's attacked here is a fairly silly decision in getPolicies()
> to query pg_policy once per table, when we could do so just once.
> It might have been okay if we skipped the per-table query for
> tables that lack policies, but it's not clear to me that we can
> know that without looking into pg_policy. In any case I doubt
> this is ever going to be less efficient than the original coding.
>
> regards, tom lane
Re: Is there any technical reason why "alter table .. set not null" can't use index?
Would it help to create a new not null column in the target table, and then update the table by copying values from the old column to the new, not null column? Of course you’d have to ignore errors, etc., but wouldn’t that perform well enough for your needs?

Sent from my iPhone

> On Sep 8, 2021, at 1:15 AM, hubert depesz lubaczewski wrote:
>
> On Wed, Sep 08, 2021 at 07:09:31AM +0200, Alexander Kukushkin wrote:
>> Hi,
>>
>>> On Wed, 8 Sep 2021, 06:59 hubert depesz lubaczewski, wrote:
>>>
>>> Hi,
>>> we needed recently to add not null constraint on some fields, and it
>>> struck me that it took long.
>>> Obviously - pg has to check the data. But it seems that it can't use
>>> index.
>>
>> It can't use the index, but can use an already existing CHECK CONSTRAINT,
>> that could be created as NOT VALID and validated without holding heavy
>> locks. After adding not null you can drop the constraint.
>
> Thanks. Forgot about these.
>
> Best regards,
>
> depesz
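A sketch of the CHECK-constraint route Alexander describes, with a hypothetical table and column. On PostgreSQL 12 and later the final SET NOT NULL notices the already-validated constraint and skips the full-table scan:

    -- Cheap: NOT VALID skips the initial scan and needs only a brief lock.
    ALTER TABLE orders
      ADD CONSTRAINT orders_customer_id_not_null
      CHECK (customer_id IS NOT NULL) NOT VALID;

    -- The scan happens here, under SHARE UPDATE EXCLUSIVE, so normal
    -- reads and writes keep flowing.
    ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_not_null;

    -- PostgreSQL 12+: the constraint already proves there are no NULLs,
    -- so no second scan is needed.
    ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;
    ALTER TABLE orders DROP CONSTRAINT orders_customer_id_not_null;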
Re: Alter table fast
I don't think I qualify as an expert, but I can nominate a course of action for you to consider. Rather than convert an existing column from one data type to another, might it not be easier to simply recreate the table with the correct data type? There are caveats! You have to consider any referential integrity constraints. You have to have a LOT of available disk space. You really want to do this in batches. If you can swing those issues, you might have a shot.

1. CREATE TABLE new_table (LIKE old_table INCLUDING ALL);
2. ALTER TABLE new_table ALTER COLUMN your_column TYPE new_type;
3. INSERT INTO new_table SELECT * FROM old_table; -- recommend you do this in batches
4. DROP TABLE old_table;
5. ALTER TABLE new_table RENAME TO old_table;

Regards,
Gus

On Thu, Jan 9, 2025 at 11:27 AM Ron Johnson wrote:
> On Thu, Jan 9, 2025 at 11:25 AM veem v wrote:
>
>> Hello,
>> It's postgres version 16.1, we want to convert an existing column data
>> type from integer to numeric and it's taking a long time. The size of the
>> table is ~50GB and the table has ~150million rows in it and it's not
>> partitioned. We tried running the direct alter and it's going beyond hours,
>> so wanted to understand from experts what is the best way to achieve this?
>
> Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
>
> --
> Death to , and butter sauce.
> Don't boil me, I'm still alive.
> lobster!
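Filled in for the case in this thread (integer to numeric(15,0) on a ~150-million-row table), the recipe might look roughly like this; the names are invented, and step 3 should really be driven in key-range batches from a script:

    CREATE TABLE orders_new (LIKE orders INCLUDING ALL);
    ALTER TABLE orders_new ALTER COLUMN amount TYPE numeric(15,0);

    -- One batch of the copy; loop over id ranges until everything has moved.
    INSERT INTO orders_new
    SELECT * FROM orders WHERE id >= 1 AND id < 1000001;

    -- When the copy is complete and verified (and writers are paused):
    BEGIN;
    DROP TABLE orders;
    ALTER TABLE orders_new RENAME TO orders;
    COMMIT;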