why does this query not use a parallel query
Have a query:

explain analyze SELECT minion_id FROM mob_player_mob_118 WHERE player_id = 55351078;

QUERY PLAN
----------
Index Only Scan using mob_player_mob_118_pkey on mob_player_mob_118  (cost=0.44..117887.06 rows=4623076 width=4) (actual time=0.062..3716.105 rows=4625123 loops=1)
  Index Cond: (player_id = 55351078)
  Heap Fetches: 1152408
Planning time: 0.241 ms
Execution time: 5272.171 ms

If I just get the count, it will use a parallel query:

explain analyze SELECT count(minion_id) FROM mob_player_mob_118 WHERE player_id = 55351078;

Thanks,
Dave Cramer
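[Editor's note, not from the thread: the planner charges `parallel_tuple_cost` for every row a worker hands back through a Gather node, so a query that returns all ~4.6M matching rows gains little from parallelism, while `count(minion_id)` collapses those rows inside the workers before they reach the leader. A hedged diagnostic sketch in JDBC — the GUC names are real PostgreSQL settings, but zeroing them is an experiment to see whether a parallel plan appears, not a production recommendation; connection setup is omitted:]

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class ParallelPlanProbe {
    // Session settings that make a parallel plan look cheaper to the planner.
    // Zero values are for experimentation only.
    static List<String> encourageParallelism() {
        return List.of(
                "SET parallel_setup_cost = 0",
                "SET parallel_tuple_cost = 0");
    }

    // Re-run EXPLAIN after applying the settings and look for a Gather node.
    static void explain(Connection con, String query) throws SQLException {
        try (Statement st = con.createStatement()) {
            for (String s : encourageParallelism()) {
                st.execute(s);
            }
            try (ResultSet rs = st.executeQuery("EXPLAIN " + query)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```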
Re: [PERFORM] Dissuade the use of exclusion constraint index
Adam,

I think the first thing to do is to make hackers aware of the specifics of which indexes are being used, etc., so that the planner could be taught how to use better ones. Self-contained examples do wonders.

Dave Cramer
[email protected]
www.postgresintl.com

On 11 April 2018 at 01:59, Adam Brusselback wrote:
> Just wondering if anyone has any thoughts on what I can do to alleviate
> this issue?
>
> I'm kinda at a loss as to what to try to tweak for this.
Re: Function call very slow from JDBC/java but super fast from DBear
On Tue, 8 Aug 2023 at 17:07, An, Hongguo (CORP) wrote:
> Hi:
>
> I have a function, if I call it from DBeaver, it returns within a minute.
>
>
>
> call commonhp.run_unified_profile_load_script_work_assignment_details(
> 'BACDHP', 'G3XPM6YE2JHMSQA2');
>
> but if I called it from spring jdbc template, it never comes back:
>
> public void runTransform(String proc, InitDataLoadEntity entity) {
>
>     log.info("Initial data finished data migration for {}, starting transform for {}...",
>             entity.getOrganizationOid(), proc);
>
>     var schema = clientDbInfo.getSchema(entity.getOrganizationOid())[1].toUpperCase();
>
>     var count = unifiedProfileJdbcTemplate.update("call commonhp." + proc + "(?, ?)",
>             schema, entity.getOrganizationOid());
>
>     log.info("Initial data finished data migration for {}, end transform for {}, result is {}",
>             entity.getOrganizationOid(), proc, count);
> }
>
>
> The server does show high CPU, the function has mainly just one insert
> command (batch insert), the target table has 3 FKs.
>
The main difference is that the driver is going to use an unnamed statement to run this.
Do you have server logs to see the statement being executed?
Explain plan(s)?
Dave
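[Editor's note: one way to act on Dave's suggestion is to have the server itself log the statement and its plan. A hedged sketch; it assumes `auto_explain` is loaded (e.g. via `session_preload_libraries`) and that the role is allowed to change these settings — neither is confirmed in the thread:]

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ProcedureWithPlanLogging {
    // Build the same statement text the JdbcTemplate executes.
    static String callSql(String proc) {
        return "call commonhp." + proc + "(?, ?)";
    }

    // Run the procedure with auto_explain capturing the plan in the server log.
    static void runWithPlanLogging(Connection con, String proc,
                                   String schema, String oid) throws SQLException {
        try (Statement st = con.createStatement()) {
            // Log every statement's plan for this session, with actual timings.
            st.execute("SET auto_explain.log_min_duration = 0");
            st.execute("SET auto_explain.log_analyze = on");
        }
        try (CallableStatement cs = con.prepareCall(callSql(proc))) {
            cs.setString(1, schema);
            cs.setString(2, oid);
            cs.execute();
        }
    }
}
```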
>
>
> Please help.
>
> Thanks
>
> Andrew
>
>
>
Re: Performance problems with Postgres JDBC 42.4.2
On Mon, 6 Nov 2023 at 09:59, Jose Osinde wrote:
>
> Dear all,
>
> I'm running a query from Java on a postgres database:
>
> Java version: 17
> JDBC version: 42.4.2
> Postgres version: 13.1
>
> In parallel I'm testing the same queries from pgAdmin 4 version 6.13
>
> The tables I'm using contains more than 10million rows each and I have two
> questions here:
>
> 1. I need to extract the path of a file without the file itself. For this
> I use two alternatives as I found that sentence "A" is much faster than
> the "B" one:
>
> "A" sentence:
>
> SELECT DISTINCT ( LEFT(opf.file_path, length(opf.file_path) - position('/'
> in reverse(opf.file_path))) ) AS path
>   FROM product AS op
>   JOIN product_file AS opf ON opf.product_id = op.id
>  WHERE op.proprietary_end_date <= CURRENT_DATE
>    AND op.id LIKE 'urn:esa:psa:%'
>
> "B" sentence:
>
> SELECT DISTINCT ( regexp_replace(opf.file_path, '(.*)\/(.*)$', '\1') ) AS path
>   FROM product AS op
>   JOIN product_file AS opf ON opf.product_id = op.id
>  WHERE op.proprietary_end_date <= CURRENT_DATE
>    AND op.id LIKE 'urn:esa:psa:%'
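[Editor's note: both expressions compute the same thing — the directory part of the path without the trailing slash — so the speed difference is plausibly the per-row cost of the regex engine in "B" versus simple string arithmetic in "A". A small illustrative Java equivalent of what both expressions return (not an explanation of the planner behaviour):]

```java
public class PathDir {
    // Java equivalent of both SQL expressions: strip the file name, keep the
    // directory part without the trailing slash.
    static String dirOf(String filePath) {
        int lastSlash = filePath.lastIndexOf('/');
        // Like both SQL versions, return the input unchanged when there is no '/'.
        return lastSlash < 0 ? filePath : filePath.substring(0, lastSlash);
    }
}
```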
>
> 2. Running sentence "A" on the pgAdmin client takes 4-5 minutes to finish,
> but running it from a Java program it never ends. This is still the case
> when I limit the output to the first 100 rows, so I assume this is not a
> problem with the amount of data being transferred but with the way postgres
> resolves the query. To make it work in Java I had to define a postgres
> function that I call from the Java code instead of running the query
> directly.
>
> I had a similar problem in the past with a query that performed very
> poorly from a Java client while it was fine from pgAdmin or a python
> script. In that case it was a matter of a column type not supported by
> JDBC (citext), which led to an implicit cast that either prevented the
> postgres engine from using a given index or forced it to cast all the
> values of that column before using them, I'm not sure now. But I don't
> think this is the case here.
>
> Could anyone help me again?
>
Can you share your Java code?
If you are using a PreparedStatement, the driver will use the extended
protocol, which may be slower. Plain Statements use the simple query
protocol, which is faster and more like pgAdmin.
Issuing a Query and Processing the Result | pgJDBC (postgresql.org)
Dave
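[Editor's note: pgJDBC also exposes a connection parameter, `preferQueryMode=simple`, that makes the driver use the simple query protocol even for PreparedStatements — a quick way to test Dave's hypothesis without rewriting code. A hedged sketch; host, database, and credentials are placeholders, not from the thread:]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SimpleQueryModeExample {
    // Append pgJDBC's preferQueryMode=simple so queries go through the
    // simple query protocol, closer to what pgAdmin does.
    static String withSimpleQueryMode(String baseUrl) {
        return baseUrl + (baseUrl.contains("?") ? "&" : "?") + "preferQueryMode=simple";
    }

    // Placeholder connection details for illustration only.
    static Connection connect() throws SQLException {
        String url = withSimpleQueryMode("jdbc:postgresql://localhost:5432/psa");
        return DriverManager.getConnection(url, "app", "secret");
    }
}
```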
>
>
Re: Query out of memory
On Tue, 19 Oct 2021 at 05:54, Adam Brusselback wrote:
> That work_mem value could be way too high depending on how much ram your
> server has... which would be a very important bit of information to help
> figure this out. Also, what Postgres / OS versions?

work_mem is definitely too high. With 250 connections there is no way you
could allocate 2GB to each one of them if needed.

Dave Cramer
www.postgres.rocks
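[Editor's note: the arithmetic behind Dave's point, as a small illustrative sketch. Each sort or hash node in a plan may use up to work_mem, so the real worst case can be even higher than connections × work_mem:]

```java
public class WorkMemBudget {
    // Worst case assuming every connection uses its full work_mem at once.
    static long worstCaseBytes(int maxConnections, long workMemBytes) {
        return (long) maxConnections * workMemBytes;
    }

    public static void main(String[] args) {
        long twoGiB = 2L * 1024 * 1024 * 1024;
        long total = worstCaseBytes(250, twoGiB);
        // 250 connections x 2 GiB = 500 GiB of potential memory demand
        System.out.println(total / (1024L * 1024 * 1024) + " GiB");
    }
}
```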
