Re: Why is my query 3 times faster on my workstation than on my server?

2025-12-04 Thread Pavel Stehule
Hi

On Thu, 4 Dec 2025 at 19:46, Vincent Veyron wrote:

> Hi,
>
> Using the same query, with the same database on both machines, the plans and
> estimates are nearly identical, but the actual times are roughly three times
> higher on my server than on my workstation, for every node in the plan. Can
> you tell me what explains the difference?
>
> I work with PostgreSQL 15.14 on Debian Old Stable (bookworm).
>
> My workstation is a Lenovo X250 with 8GB RAM and an Intel(R) Core(TM)
> i5-5300U CPU @ 2.30GHz (4 cores)
> cpu MHz : 500.000
> cache size  : 3072 KB
>
> My server is a Start-3-S-SSD server from online.net with 4 GB RAM and an
> Intel(R) Atom(TM) CPU  C2338  @ 1.74GHz (2 cores)
> cpu MHz : 1198.820
> cache size  : 1024 KB
>

The Intel Atom is a slow CPU.

https://en.wikipedia.org/wiki/Intel_Atom

Regards

Pavel



Re: Why is my query 3 times faster on my workstation than on my server?

2025-12-04 Thread Tom Lane
Pavel Stehule  writes:
> On Thu, 4 Dec 2025 at 19:46, Vincent Veyron wrote:
>> Using the same query, with the same database on both machines, the plans and
>> estimates are nearly identical, but the actual times are roughly three times
>> higher on my server than on my workstation, for every node in the plan. Can
>> you tell me what explains the difference?
>> 
>> I work with PostgreSQL 15.14 on Debian Old Stable (bookworm).
>> 
>> My workstation is a Lenovo X250 with 8GB RAM and an Intel(R) Core(TM)
>> i5-5300U CPU @ 2.30GHz (4 cores)
>> cpu MHz : 500.000
>> cache size  : 3072 KB
>> 
>> My server is a Start-3-S-SSD server from online.net with 4 GB RAM and an
>> Intel(R) Atom(TM) CPU  C2338  @ 1.74GHz (2 cores)
>> cpu MHz : 1198.820
>> cache size  : 1024 KB

> The Intel Atom is a slow CPU.

Yeah.  Check out the ratings at

https://browser.geekbench.com

They seem to think the i5-5300U is a 2-core device, maybe you are
counting hyperthreading?  Anyway, the scores for it are around
900-1000 single-core and 1600-ish multi-core.  The Atom C2338
shows up around 140-150 single-core and 250-ish multi-core;
plus, those ratings mention 2400MHz which is faster than the
clock speed you are showing.

So from these numbers I'd ask not "why 3x slower?" but "why only
3x slower?".  Maybe your queries are partly disk-bound.

regards, tom lane
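
A crude, data-independent way to confirm the raw per-core gap is to time a
pure-CPU statement on each machine and compare the elapsed times; a minimal
sketch, run in psql with \timing enabled (the ratio should roughly track the
single-core Geekbench scores):

-- Pure CPU work, no table or disk access; compare wall-clock time per machine.
\timing on
SELECT count(*) FROM generate_series(1, 10000000);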




RE: Why is my query 3 times faster on my workstation than on my server?

2025-12-04 Thread Clay Jackson (cjackson)

Slower CPU, less RAM

Clay Jackson


From: Pavel Stehule 
Sent: Thursday, December 4, 2025 10:54 AM
To: Vincent Veyron 
Cc: [email protected]
Subject: Re: Why is my query 3 times faster on my workstation than on my server?



Why is my query 3 times faster on my workstation than on my server?

2025-12-04 Thread Vincent Veyron
Hi,

Using the same query, with the same database on both machines, the plans and
estimates are nearly identical, but the actual times are roughly three times
higher on my server than on my workstation, for every node in the plan. Can you
tell me what explains the difference?

I work with PostgreSQL 15.14 on Debian Old Stable (bookworm).

My workstation is a Lenovo X250 with 8GB RAM and an Intel(R) Core(TM) i5-5300U 
CPU @ 2.30GHz (4 cores)
cpu MHz : 500.000
cache size  : 3072 KB

My server is a Start-3-S-SSD server from online.net with 4 GB RAM and an 
Intel(R) Atom(TM) CPU  C2338  @ 1.74GHz (2 cores)
cpu MHz : 1198.820
cache size  : 1024 KB

The query selects ~ 18 000 rows out of ~ 100 000 in table 'tbljournal',
calculates several window functions over the results, then joins to another
small table (10 000 rows).

Below are the two plans, followed by the non-default settings in postgresql.conf
(they are identical on both machines), with the table's schema at the bottom.
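
One simple way to double-check that the configurations really do match is to
compare the non-default entries reported by pg_settings on each machine; a
minimal sketch, using only the standard pg_settings view:

-- Lists settings changed from their built-in defaults; run this on both
-- machines and diff the output to confirm they are identical.
SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override')
ORDER BY name;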

##
Explain analyze on the workstation
##

2025-12-04 17:09:28.133 CET [14576] www-data@compta LOG:  duration: 210.004 ms  
plan:
Query Text: 
WITH t1 AS NOT MATERIALIZED (
SELECT id_client, fiscal_year, numero_compte, id_entry, id_line, 
date_ecriture, substring(libelle_journal FOR 24) as libelle_journal, 
substring(coalesce(id_facture, ' ') FOR 14) as id_facture, 
substring(coalesce(id_paiement, ' ') FOR 14) as id_paiement, 
substring(coalesce(libelle, ' ') FOR 34) as libelle, debit/100::numeric as 
debit, credit/100::numeric as credit, lettrage, pointage, 0 as lettrage_check, 
to_char(sum(debit/100::numeric) over (PARTITION BY numero_compte), 
'999G999G999G990D00') as total_debit, to_char(sum(credit/100::numeric) over 
(PARTITION BY numero_compte), '999G999G999G990D00') as total_credit, 
to_char(sum((credit-debit)/100::numeric) over (PARTITION BY numero_compte ORDER 
BY date_ecriture, id_line), '999G999G999G990D00') as solde, 
to_char(sum(debit/100::numeric) over (), '999G999G999G990D00') as 
grand_total_debit, to_char(sum(credit/100::numeric) over (), 
'999G999G999G990D00') as grand_total_credit, count(*) over () as lines, 
coalesce(libelle_section, '') as libelle_section, row_number() over (ORDER BY 
numero_compte, date_ecriture, id_line) as row_number
FROM tbljournal
WHERE id_client = $1 and fiscal_year = $2  
ORDER BY numero_compte, date_ecriture, id_line 
) 
SELECT t1.numero_compte, regexp_replace(t2.libelle_compte, '\s', 
' ', 'g') as libelle_compte, id_entry, id_line, date_ecriture, 
libelle_journal, coalesce(id_facture, ' ') as id_facture, 
coalesce(id_paiement, ' ') as id_paiement, coalesce(libelle, ' ') as 
libelle, to_char(debit, '999G999G999G990D00') as debit, to_char(credit, 
'999G999G999G990D00') as credit, lettrage_check, pointage, total_debit, 
total_credit, solde, grand_total_debit, grand_total_credit, libelle_section, 
lettrage, lines
FROM t1 INNER JOIN tblcompte t2 using (id_client, fiscal_year, 
numero_compte)
WHERE row_number > 2000 AND row_number < 3001

ORDER BY row_number

 
Sort  (cost=3925.35..3926.60 rows=501 width=458) (actual 
time=208.061..208.142 rows=1000 loops=1)
  Sort Key: t1.row_number
  Sort Method: quicksort  Memory: 384kB
  Buffers: shared hit=3565, temp read=341 written=298
  ->  Hash Join  (cost=352.95..3902.88 rows=501 width=458) (actual 
time=137.771..206.979 rows=1000 loops=1)
Hash Cond: ((t1.id_client = t2.id_client) AND (t1.fiscal_year = 
t2.fiscal_year) AND (t1.numero_compte = t2.numero_compte))
Buffers: shared hit=3565, temp read=341 written=298
->  Subquery Scan on t1  (cost=1.75..3520.49 rows=3484 
width=434) (actual time=132.109..195.096 rows=1000 loops=1)
  Filter: (t1.row_number > 2000)
  Rows Removed by Filter: 2000
  Buffers: shared hit=3480, temp read=341 written=298
  ->  WindowAgg  (cost=1.75..3389.84 rows=10452 width=434) 
(actual time=123.125..194.702 rows=3000 loops=1)
Filter: ((row_number() OVER (?)) < 3001)
Rows Removed by Filter: 15188
Buffers: shared hit=3480, temp read=341 written=298
->  WindowAgg  (cost=1.75..2762.72 rows=10452 
width=223) (actual time=0.122..96.685 rows=18188 loops=1)
  Buffers: shared hit=3480
  ->  WindowAgg  (cost=1.75..2475.29 rows=10452 
width=159) (actual time=0.113..70.644 rows=18188 loops=1)
Run Condition: (row_number() OVER (?) < 
3001)
Buffers: shared hit=3480
->  WindowAgg  (cost=1.75..2266.25 
rows=10452 width

Re: Why is my query 3 times faster on my workstation than on my server?

2025-12-04 Thread Vincent Veyron
On Thu, 04 Dec 2025 14:06:40 -0500
Tom Lane  wrote:

Hi Tom,

> 
> Yeah.  Check out the ratings at
> 
> https://browser.geekbench.com
> 

I guess these ratings will come in handy when picking my next server, thank you.

>They seem to think the i5-5300U is a 2-core device, maybe you are counting 
>hyperthreading?
 
'cat /proc/cpuinfo' lists 4 processors (0 to 3), which I assumed to be cores,
but my knowledge about hardware is very limited, so I may be misinterpreting it
(the output is pasted at the bottom of this message).

> So from these numbers I'd ask not "why 3x slower?" but "why only
> 3x slower?".  Maybe your queries are partly disk-bound.

Not sure how I can check this, but here is the plan for the same query, run on
an HDD machine with the same configuration as the others. It incurs a 30%
performance penalty compared to the slow SSD server.

This machine is a kimsufi dedicated server, advertised as 'KS-3 - Atom N2800 -
4GB DDR3 1066 MHz', although cpuinfo tells me:
model name  : Intel(R) Atom(TM) CPU N2800   @ 1.86GHz



                                   QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=4534.02..4535.23 rows=485 width=458) (actual 
time=1237.451..1237.695 rows=1000 loops=1)
   Sort Key: t1.row_number
   Sort Method: quicksort  Memory: 384kB
   ->  Hash Join  (cost=3002.19..4512.38 rows=485 width=458) (actual 
time=860.457..1232.407 rows=1000 loops=1)
 Hash Cond: ((t1.id_client = t2.id_client) AND (t1.fiscal_year = 
t2.fiscal_year) AND (t1.numero_compte = t2.numero_compte))
 ->  Subquery Scan on t1  (cost=2640.43..4121.08 rows=3290 width=434) 
(actual time=809.599..1146.165 rows=1000 loops=1)
   Filter: (t1.row_number > 2000)
   Rows Removed by Filter: 2000
   ->  WindowAgg  (cost=2640.43..3997.69 rows=9871 width=434) 
(actual time=765.978..1144.400 rows=3000 loops=1)
 Filter: ((row_number() OVER (?)) < 3001)
 Rows Removed by Filter: 15188
 ->  WindowAgg  (cost=2640.43..3405.43 rows=9871 width=198) 
(actual time=209.964..649.234 rows=18188 loops=1)
   ->  WindowAgg  (cost=2640.43..3133.98 rows=9871 
width=134) (actual time=209.920..491.704 rows=18188 loops=1)
 Run Condition: (row_number() OVER (?) < 3001)
 ->  WindowAgg  (cost=2640.43..2936.56 
rows=9871 width=126) (actual time=209.879..405.444 rows=18188 loops=1)
   ->  Sort  (cost=2640.43..2665.10 
rows=9871 width=94) (actual time=209.829..220.354 rows=18188 loops=1)
 Sort Key: 
tbljournal.numero_compte, tbljournal.date_ecriture, tbljournal.id_line
 Sort Method: quicksort  Memory: 
3299kB
 ->  Bitmap Heap Scan on tbljournal 
 (cost=137.47..1985.54 rows=9871 width=94) (actual time=6.990..43.471 
rows=18188 loops=1)
   Recheck Cond: ((id_client = 
2739) AND (fiscal_year = 2024))
   Heap Blocks: exact=857
   ->  Bitmap Index Scan on 
tbljournal_client_year_libelle_journal_idx  (cost=0.00..135.00 rows=9871 
width=0) (actual time=6.212..6.212 rows=18188 loops=1)
 Index Cond: 
((id_client = 2739) AND (fiscal_year = 2024))
 ->  Hash  (cost=187.55..187.55 rows=9955 width=36) (actual 
time=50.708..50.710 rows=9955 loops=1)
   Buckets: 16384  Batches: 1  Memory Usage: 813kB
   ->  Seq Scan on tblcompte t2  (cost=0.00..187.55 rows=9955 
width=36) (actual time=0.059..18.644 rows=9955 loops=1)
 Planning Time: 8.203 ms
 Execution Time: 1241.020 ms
(28 rows)
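
One way to check whether the queries are partly disk-bound, as Tom suggests, is
to turn on track_io_timing for the session and look at the "shared read" and
"I/O Timings" figures that EXPLAIN (ANALYZE, BUFFERS) then reports per node; a
minimal sketch (the SELECT below is only a simplified stand-in using the same
table and filter as the plan above; substitute the full logged statement for a
faithful comparison):

-- track_io_timing is a superuser-settable GUC; run as a superuser
-- or set it via ALTER SYSTEM / postgresql.conf.
SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT numero_compte, sum(debit) AS total_debit
FROM tbljournal
WHERE id_client = 2739 AND fiscal_year = 2024
GROUP BY numero_compte;

-- "shared hit" blocks come from shared_buffers, "shared read" blocks come from
-- the OS cache or disk, and "I/O Timings" shows the time spent on those reads;
-- if reads and I/O timings are near zero on both machines, the difference is
-- pure CPU.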


#
cat /proc/cpuinfo on i5-5300U
#

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 61
model name  : Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
stepping: 4
microcode   : 0x2f
cpu MHz : 2294.544
cache size  : 3072 KB
physical id : 0
siblings: 4
core id : 0
cpu cores   : 2
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 20
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc cpuid aperf