Re: Insert query performance
On Mon, Aug 19, 2024 at 4:33 PM David Rowley wrote:
> On Mon, 19 Aug 2024 at 19:48, sud wrote:
> > In a version 15.4 postgres database, is it possible that, if we have two big
> > range-partitioned tables with a foreign key relationship between them, inserts
> > into the child table can be slow if we don't have an index on the foreign key
> > in the child table? Basically it needs to make sure the new row has already
> > been added to the parent partition table or not.
>
> Having an index on the referencing columns is only useful for DELETEs
> and UPDATEs affecting the foreign key column(s). For INSERTs to the
> referencing table, technically having indexes there would only slow
> down inserts due to the additional overhead of having to maintain the
> index, however, the overhead of having the index might be fairly
> minuscule when compared to performing a CASCADE UPDATE or DELETE to
> the referencing table when the DDL is performed on the referenced
> table.
>
> > And is there any possible way (for example query tracing etc.) to get the
> > underlying system queries which get triggered as part of the main insert
> > query? For example, in the above scenario, postgres must be executing some
> > query to check if the incoming row to the child table already exists in the
> > parent table or not?
>
> EXPLAIN ANALYZE will list the time it took to execute the foreign key
> trigger in the "Trigger for constraint" section.
>
> David

Thank you so much David. If I understand correctly, the index on the foreign key mainly helps improve delete/update performance on the parent table, when the referenced column on the parent side is affected. (This might be the reason why our detach partition on the parent table runs long and never completes, as we have no index on the foreign key.)

However, my initial understanding that "*having the FK index will improve the insert performance in the child table*" seems to be inaccurate. Rather, as you mentioned, it may negatively impact loading/insert performance because each insert now has to maintain the additional index. For an insert into the child table, to ensure the referenced row is present in the parent, it just looks up the parent by its primary key (which is indexed by default), so an index on the foreign key in the child table is not needed and won't make the constraint validation any faster. Please correct me if my understanding is wrong here.

Additionally, as you mentioned, "explain analyze" will show a section with how much time the constraint validation really takes, and I can see that section now. But it seems it will really need that INSERT statement to be executed, and we can't really do that in production as it would physically insert data into the table. So do you mean to just run "explain analyze" for the INSERT query, capture the plan, and then roll back? And in our case the inserts happen row by row, so we will see if we can sum that "constraint validation" time across a handful of inserts to get a better idea of the percentage of time we really spend in constraint validation.
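To make sure I understand the rollback idea, something like the sketch below is what I had in mind (table, column and constraint names are just placeholders, not our real schema):

```
BEGIN;

-- Executes the insert (including the FK check against the parent) but
-- keeps nothing, since the transaction is rolled back at the end.
EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO child_part_tab (id, parent_id, payload)
VALUES (1001, 42, 'test row');

-- The output should end with something like
--   Trigger for constraint child_part_tab_parent_id_fkey: time=... calls=1
-- which is the foreign-key validation time mentioned above.

ROLLBACK;
```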
Re: Insert query performance
On Mon, Aug 19, 2024 at 1:25 PM Muhammad Ikram wrote:
> Hi Sud,
>
> Please make following change in your postgresql.conf file
>
> log_statement = 'all'

Will this put all the internal SQL queries or the recursive query entries into the pg_stat_statements view, which we can then analyze? And also, to debug issues in production, would it be a good idea to turn it on for a short while and then turn it off, or can it have significant performance overhead?
Re: Insert query performance
It will record all statements in the logs. If you are concerned about query times then you may use pg_stat_statements.

Muhammad Ikram

On Tue, 20 Aug 2024 at 12:19, sud wrote:
>
> On Mon, Aug 19, 2024 at 1:25 PM Muhammad Ikram wrote:
>
>> Hi Sud,
>>
>> Please make following change in your postgresql.conf file
>>
>> log_statement = 'all'
>>
>
> Will this put all the internal SQL queries or the recursive query entries into
> the pg_stat_statements view, which we can then analyze? And also, to debug
> issues in production, would it be a good idea to turn it on for a short while
> and then turn it off, or can it have significant performance overhead?
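As a rough illustration of the pg_stat_statements route (this assumes the extension is already listed in shared_preload_libraries; the column choice and LIMIT are only examples):

```
-- One-time setup in the target database (the server must have been
-- started with shared_preload_libraries = 'pg_stat_statements'):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top-level statements ranked by total execution time. Set
-- pg_stat_statements.track = 'all' if statements issued from inside
-- functions should be counted as well.
SELECT query,
       calls,
       total_exec_time,
       mean_exec_time
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  20;
```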
Re: Insert query performance
On Tue, 20 Aug 2024 at 19:09, sud wrote:
> However, my initial understanding that "having the FK index will improve the
> insert performance in the child table" seems to be inaccurate. Rather, as you
> mentioned, it may negatively impact loading/insert performance because each
> insert now has to maintain the additional index. For an insert into the child
> table, to ensure the referenced row is present in the parent, it just looks up
> the parent by its primary key (which is indexed by default), so an index on
> the foreign key in the child table is not needed and won't make the constraint
> validation any faster. Please correct me if my understanding is wrong here.

If you think about what must happen when you insert into the referencing
table, the additional validation that the foreign key must do is check that a
corresponding record exists in the referenced table. An index on the
referencing table does not help speed that up.

> Additionally, as you mentioned, "explain analyze" will show a section with how
> much time the constraint validation really takes, and I can see that section
> now. But it seems it will really need that INSERT statement to be executed,
> and we can't really do that in production as it would physically insert data
> into the table. So do you mean to just run "explain analyze" for the INSERT
> query, capture the plan, and then roll back? And in our case the inserts
> happen row by row, so we will see if we can sum that "constraint validation"
> time across a handful of inserts to get a better idea of the percentage of
> time we really spend in constraint validation.

I'd recommend performing a schema-only dump of your production database and
experimenting well away from production. See pg_dump --schema-only. I also
recommend not leaving performance to chance and testing the impact of index vs
no index away from production with some data loaded that is representative of
your production data (or use the production data if it's available and small
enough to manage). Use pgbench to see what impact having the index on the
referencing table has on performance of inserts into that table vs what
improvements you gain from having the index when there's a cascading delete
from the referenced table.

You might also want to look into auto_explain [1]. You can load this into a
single session and set auto_explain.log_min_duration = 0,
auto_explain.log_analyze = on and auto_explain.log_nested_statements = on.
That should give you the plan for the cascade DELETE query that's executed by
the trigger when you perform the DELETE on the referenced table. (Also see the
note about auto_explain.log_timing)

David

[1] https://www.postgresql.org/docs/15/auto-explain.html
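A minimal sketch of that auto_explain session setup (LOAD requires a superuser; the DELETE target is just a placeholder name):

```
LOAD 'auto_explain';

SET auto_explain.log_min_duration = 0;        -- log the plan of every statement
SET auto_explain.log_analyze = on;            -- include actual timings/row counts
SET auto_explain.log_nested_statements = on;  -- include statements run by triggers

-- With the settings above, the plan of the FK trigger's internal query
-- should show up in the server log alongside this statement's own plan:
DELETE FROM parent_part_tab WHERE id = 42;
```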
Re: Planet Postgres and the curse of AI
On Tue, Jul 23, 2024 at 12:45 PM Avinash Vallarapu <avinash.vallar...@gmail.com> wrote:
> However, I do agree with Lawrence that it is impossible to prove whether
> it is written by AI or a human.
> AI can make mistakes and it might mistakenly point out that a blog is
> written by AI (which I know is difficult to implement).

Right - I am not interested in "proving" things, but I think a policy to
discourage overuse of AI is warranted.

> People may also use AI generated images in their blogs, and they may be
> meaningful for their article.
> Is it only the content or also the images? It might get too complicated
> while implementing some rules.

Only the content; the images are perfectly fine. Even expected, these days.

> Ultimately, humans do make mistakes and we shouldn't discourage people
> assuming it is AI that made that mistake.

Humans make mistakes. AI confidently hallucinates.
Re: Planet Postgres and the curse of AI
On Fri, Jul 19, 2024 at 3:22 AM Laurenz Albe wrote:
> Why not say that authors who repeatedly post grossly counterfactual or
> misleading content can be banned?

I like this, and feel we are getting closer. How about:

"Posts should be technically and factually correct. AI should be used for
minor editing, not primary generation" (wordsmithing needed)

Cheers,
Greg
Looking for pg_config for postgresql 13.16
I am looking for pg_config for postgresql 13.16 that I run under Rocky Linux 9. It seems the latest version in the RL appstream is the pg_config in libpq-devel-13.11-1.el9.x86_64, but dnf complains:

"installed package postgresql13-devel-13.16-2PGDG.rhel9.x86_64 obsoletes libpq-devel <= 42.0 provided by libpq-devel-13.11-1.el9.x86_64 from appstream"

What is the recommended way around this? Thanks.
Re: Looking for pg_config for postgresql 13.16
On Tue, Aug 20, 2024 at 11:56 AM H wrote:
> I am looking for pg_config for postgresql 13.16 that I run under Rocky
> Linux 9. It seems the latest version in the RL appstream is the pg_config in
> libpq-devel-13.11-1.el9.x86_64, but dnf complains:
>
> "installed package postgresql13-devel-13.16-2PGDG.rhel9.x86_64 obsoletes
> libpq-devel <= 42.0 provided by libpq-devel-13.11-1.el9.x86_64 from appstream"
>
> What is the recommended way around this?

That doesn't make sense. /usr/pgsql-13/bin/pg_config should be in plain old postgresql13.

This is PG14, but the package structure has been the same since at least PG 9.6:

$ which pg_config
/usr/pgsql-14/bin/pg_config

$ yum whatprovides /usr/pgsql-14/bin/pg_config
RHEL8-Pool for x86_64                        2.9 kB/s | 251 B   00:00
rhel-8.9-x86_64-dvd                           54 MB/s |  13 MB  00:00
RES-8-Updates for x86_64                      97 MB/s |  83 MB  00:00
RES-AS-8-Updates for x86_64                   90 MB/s |  65 MB  00:00
RES-CB-8-Updates for x86_64                   40 MB/s | 7.5 MB  00:00
RES8-Manager-Tools-Pool for x86_64           504 kB/s |  58 kB  00:00
RES8-Manager-Tools-Updates for x86_64         20 MB/s | 2.6 MB  00:00
supp-supplementary-8.9-rhel-8-x86_64-dvd     559 kB/s |  47 kB  00:00
postgresql14-14.13-2PGDG.rhel8.x86_64 : PostgreSQL client programs and libraries
Repo         : @System
Matched from:
Filename     : /usr/pgsql-14/bin/pg_config

--
Death to America, and butter sauce. Iraq lobster!
Re: Planet Postgres and the curse of AI
On 2024-08-20 22:44, Greg Sabino Mullane wrote:
> On Fri, Jul 19, 2024 at 3:22 AM Laurenz Albe wrote:
>> Why not say that authors who repeatedly post grossly counterfactual or
>> misleading content can be banned?
>
> I like this, and feel we are getting closer. How about:
>
> "Posts should be technically and factually correct. AI should be used for
> minor editing, not primary generation"

Sounds pretty sensible. :)

+ Justin
Re: Looking for pg_config for postgresql 13.16
Ron Johnson writes:
> On Tue, Aug 20, 2024 at 11:56 AM H wrote:
>> I am looking for pg_config for postgresql 13.16 that I run under Rocky
>> Linux 9. It seems the latest version in the RL appstream is the pg_config in
>> libpq-devel-13.11-1.el9.x86_64, but dnf complains:
>> "installed package postgresql13-devel-13.16-2PGDG.rhel9.x86_64 obsoletes
>> libpq-devel <= 42.0 provided by libpq-devel-13.11-1.el9.x86_64 from appstream"

> That doesn't make sense. /usr/pgsql-13/bin/pg_config should be in plain
> old postgresql13.

I don't think the error is complaining that pg_config appears in both
packages, it's just telling you that they are marked as being incompatible
with each other. (There might be other files that are in both of those
packages.) The easiest fix is likely to remove the libpq-devel package,
expecting postgresql13-devel to provide whatever you needed from that.

			regards, tom lane
Does a partition key need to be part of a composite index for the planner to take advantage of it? (PG 16.3+)
We have a set of operational tables that are all partitioned by organization ID (customer ID), in the 100M-row range. We also have 3-4 composite indexes on these tables that currently do not include the organization ID. Any queries that reference these tables always provide the organization ID as a discriminator.

We recently started noticing that the query planner is sequentially scanning the correct partitions, but is not using the indexes. So we ran a test by creating a new set of composite indexes that mirror the existing ones but include organization_id as the first column. When we create the composite index with organization ID in the first position, the planner both selects the correct partitions AND index scans those partitions.

Is that expected behavior, and is it appropriate to include the partition key as a leading column in any indexes on a partitioned table? (A sketch of what we tested is below.)

One additional piece of information that may or may not be relevant: a couple of weeks ago we upgraded from PG 16.1 to 16.3. In the release notes for 16.2, I did see some fixes pertaining to indexes on partitioned tables and collations, but I couldn't find details on the actual fixes (my inexperience digging into PG support).

I'm happy to provide some simple examples to illustrate what we are seeing if the behavior I'm describing is not expected.

Thanks,
Bill Kaper
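The rough shape of the change mentioned above, with table and column names simplified to placeholders (our real schema differs):

```
-- Simplified stand-in for one of the operational tables,
-- partitioned by the organization (customer) ID.
CREATE TABLE events (
    organization_id bigint      NOT NULL,
    event_type      text        NOT NULL,
    created_at      timestamptz NOT NULL
) PARTITION BY LIST (organization_id);

-- Existing style of composite index: partition key not included.
CREATE INDEX events_type_created_idx
    ON events (event_type, created_at);

-- New style of composite index: partition key as the leading column.
CREATE INDEX events_org_type_created_idx
    ON events (organization_id, event_type, created_at);

-- Queries always carry the discriminator, e.g.:
-- EXPLAIN SELECT * FROM events
--   WHERE organization_id = 123 AND event_type = 'login';
```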
Re: Planet Postgres and the curse of AI
On Wed, Aug 21, 2024 at 02:19:22AM +1000, Justin Clift wrote:
> On 2024-08-20 22:44, Greg Sabino Mullane wrote:
> > On Fri, Jul 19, 2024 at 3:22 AM Laurenz Albe wrote:
> > > Why not say that authors who repeatedly post grossly counterfactual or
> > > misleading content can be banned?
> >
> > I like this, and feel we are getting closer. How about:
> >
> > "Posts should be technically and factually correct. AI should be used for
> > minor editing, not primary generation"
>
> Sounds pretty sensible. :)

Agreed. Honestly, some of the AI is so bad that if you see something you
suspect is AI generated, you can just ask the author what they meant by that
paragraph, and they will not be able to answer.

--
  Bruce Momjian        https://momjian.us
  EDB                  https://enterprisedb.com

  Only you can decide what is important to you.
Re: Looking for pg_config for postgresql 13.16
On Tue, 2024-08-20 at 12:11 -0400, Ron Johnson wrote:
> On Tue, Aug 20, 2024 at 11:56 AM H wrote:
> > I am looking for pg_config for postgresql 13.16 that I run under Rocky
> > Linux 9. It seems the latest version in the RL appstream is the pg_config in
> > libpq-devel-13.11-1.el9.x86_64, but dnf complains:
> > "installed package postgresql13-devel-13.16-2PGDG.rhel9.x86_64 obsoletes
> > libpq-devel <= 42.0 provided by libpq-devel-13.11-1.el9.x86_64 from appstream"
> >
> > What is the recommended way around this?
>
> That doesn't make sense. /usr/pgsql-13/bin/pg_config should be in plain old
> postgresql13.
>
> This is PG14, but the package structure has been the same since at least PG 9.6:
>
> $ which pg_config
> /usr/pgsql-14/bin/pg_config
>
> $ yum whatprovides /usr/pgsql-14/bin/pg_config
> RHEL8-Pool for x86_64                        2.9 kB/s | 251 B   00:00
> rhel-8.9-x86_64-dvd                           54 MB/s |  13 MB  00:00
> RES-8-Updates for x86_64                      97 MB/s |  83 MB  00:00
> RES-AS-8-Updates for x86_64                   90 MB/s |  65 MB  00:00
> RES-CB-8-Updates for x86_64                   40 MB/s | 7.5 MB  00:00
> RES8-Manager-Tools-Pool for x86_64           504 kB/s |  58 kB  00:00
> RES8-Manager-Tools-Updates for x86_64         20 MB/s | 2.6 MB  00:00
> supp-supplementary-8.9-rhel-8-x86_64-dvd     559 kB/s |  47 kB  00:00
> postgresql14-14.13-2PGDG.rhel8.x86_64 : PostgreSQL client programs and libraries
> Repo         : @System
> Matched from:
> Filename     : /usr/pgsql-14/bin/pg_config
>
> --
> Death to America, and butter sauce. Iraq lobster!

I had not found it because it was not in the path. I found it and installed the temporal_tables extension, so everything is good. Thank you for pointing me in the right direction.
insufficient privilege with pg_read_all_stats granted
Hey folks,

I run PostgreSQL v15.8 (docker official image), and there is an issue when reading the pg_stat_statements view: most of the time the query column comes back as `<insufficient privilege>`.

I created the user that I use to fetch the data in the following way:

```
CREATE USER abcd WITH NOSUPERUSER NOCREATEROLE NOINHERIT LOGIN;
GRANT pg_read_all_stats, pg_stat_scan_tables, pg_read_all_settings TO abcd;
GRANT pg_monitor TO abcd;
```

I explicitly gave `pg_read_all_stats` and also granted `pg_monitor` just to be on the safe side, but still I get the insufficient privilege error.

```
SELECT r.rolname AS member, m.rolname AS role
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.member
JOIN pg_roles m ON m.oid = am.roleid
WHERE m.rolname = 'pg_read_all_stats' AND r.rolname = 'abcd';

 member |       role
--------+-------------------
 abcd   | pg_read_all_stats
(1 row)
```

I also tried with PostgreSQL v14.13, and this was not the case, it was working fine as expected. Then I tried v16.4 and v17beta3, and I faced the issue, so I guess something changed from v15 onwards?
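One thing I still plan to check on my side (not sure whether it's related): since the role was created with NOINHERIT, I want to compare the output with and without explicitly switching to the predefined role, roughly like this:

```
-- Run as the abcd user. With NOINHERIT, privileges that come from role
-- membership are only active after SET ROLE, so comparing the two
-- results should show whether inheritance is a factor here.
SELECT query FROM pg_stat_statements LIMIT 5;

SET ROLE pg_read_all_stats;
SELECT query FROM pg_stat_statements LIMIT 5;
RESET ROLE;
```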