I am experiencing a strange performance problem when accessing JSONB
content by primary key.
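(For reference, the access pattern in question is a single-row lookup of the
JSONB column by its primary key. A minimal sketch, using the table and column
names that appear in the query quoted further down in the thread:

    select content
    from articles
    where article_id = %s;

The fast/slow behaviour described below is about this kind of lookup.)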
My DB version() is PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg14.04+1) on
x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4,
64-bit
postgres.conf: https://justpaste.it/6pzz1
uname -a: Linu
On Thu, Sep 20, 2018 at 05:07:21PM -0700, Vladimir Ryabtsev wrote:
> I am experiencing a strange performance problem when accessing JSONB
> content by primary key.
> I noticed that with some IDs it works pretty fast, while with others it is
> 4-5 times slower. It is worth noting that there are two m
> Was the data populated differently, too?
Here is how new records were coming in over the last two months, by day:
https://i.stack.imgur.com/zp9WP.png During the day, records arrive evenly (in
both ranges), slightly faster during European and American working hours.
Since Jul 1, 2018, when we started population by onl
Sorry, dropped -performance.
Has the table been reindexed (or pg_repack'ed) since loading (or vacuumed
for that matter) ?
>>> Not sure what you mean... We created indexes on some fields (on
>> I mean REINDEX INDEX articles_pkey;
>> Or (from "contrib"): /usr/pgsql-10/bin/pg_repack -i arti
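(For clarity, a minimal sketch of what those two suggestions look like; the
index name is the one quoted above, the database name is a placeholder:

    -- plain rebuild of the primary-key index; blocks writes to the table
    -- while it runs
    REINDEX INDEX articles_pkey;

    -- or, from the shell, rebuild it with pg_repack, which only takes
    -- short exclusive locks:
    -- /usr/pgsql-10/bin/pg_repack -d <your_db> -i articles_pkey

Either way the index is rewritten from scratch, which also compacts it.)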
Vladimir Ryabtsev wrote:
> explain (analyze, buffers)
> select count(*), sum(length(content::text)) from articles where article_id
> between %s and %s
>
> Sample output:
>
> Aggregate (cost=8635.91..8635.92 rows=1 width=16) (actual
> time=6625.993..6625.995 rows=1 loops=1)
> Buffers: shared
> Setting "track_io_timing = on" should measure the time spent doing I/O
> more accurately.
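(A minimal sketch of doing that for one session, assuming a role that is
allowed to change the setting; the query is the one quoted above:

    SET track_io_timing = on;

    explain (analyze, buffers)
    select count(*), sum(length(content::text))
    from articles
    where article_id between %s and %s;

With track_io_timing on and the BUFFERS option, the plan nodes gain
"I/O Timings: read=..." lines, so read time can be separated from the rest
of the execution time.)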
I see I/O timings after this. They show that 96.5% of the long queries' time
is spent on I/O. If I subtract the I/O time from the total, I get ~1.4 s for
5000 rows, which is the SAME for both ranges if I adjust segment borders accord
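(Spelling that arithmetic out with the figures above: if ~96.5% of a long
query's time is I/O, the remaining ~3.5% is non-I/O work, and ~1.4 s of
non-I/O time per 5000 rows is roughly 0.28 ms per row, the same in both
ranges once the borders are adjusted to cover comparable row counts.)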