Re: rum index supported on pg13?

2020-10-21 Thread John the Scott
Excellent.  pg13+rum passes all our tests.

thanks-john

On Thu, Oct 15, 2020 at 3:18 AM Teodor Sigaev  wrote:
>
> Done, thank you
>
> On 14.10.2020 19:30, Devrim Gündüz wrote:
> >
> > Hi Teodor,
> >
> > On Wed, 2020-10-14 at 18:49 +0300, Teodor Sigaev wrote:
> >> Thank you, fixed and published.
> >
> > Can you please release a new tarball? We need that to build the RPM
> > packages. I'm still seeing 1.3.6 as the latest version.
> >
> > Thanks!
> >
> > Regards,
> >
>
> --
> Teodor Sigaev  E-mail: teo...@sigaev.ru
>WWW: http://www.sigaev.ru/



-- 
Fast is fine, but accuracy is final.
You must learn to be slow in a hurry.
- Wyatt Earp




RUM and WAL-generation

2020-10-21 Thread Andreas Joseph Krogh

Hi all. 

The RUM index is very nice, but the indexes get very large and produce insane 
amounts of WAL. 
Due to the way PG works (IIUC), updating an int column in a row produces lots 
of WAL because the whole row is duplicated, and if that row holds RUM-indexed 
columns it gets really bad... 
We have actually run out of WAL space in production because of this. 

I see this TODO entry in RUM: 

 * Improve GENERIC WAL to support shift (PostgreSQL core changes). 

What is the status on this? 
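
For context, a quick way to quantify the WAL cost of a single update (a 
sketch; the table, column and index names here are hypothetical): 

  -- RUM index on a tsvector column:
  CREATE INDEX doc_fts_rum ON doc USING rum (fts rum_tsvector_ops);

  -- note the WAL position, do one update, and measure the difference:
  SELECT pg_current_wal_lsn() AS before \gset
  UPDATE doc SET counter = counter + 1 WHERE id = 1;
  SELECT pg_size_pretty(
             pg_wal_lsn_diff(pg_current_wal_lsn(), :'before')
         ) AS wal_written;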


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com  
www.visena.com  
  

Re: Pgpool2 Service Won't Start

2020-10-21 Thread alanhi
Hi Bo,

I replaced the pgpool.conf file with the original file, reconfigured and it
works now.  Interesting because it was pretty much the same settings after
reconfiguring again.  Must be a setting that I missed.

Thanks,
Alan


Bo Peng wrote:
> Hi,
> 
> On Fri, 16 Oct 2020 14:33:33 -0700 (MST)
> alanhi <tnguyen3@> wrote:
> 
>> Hi, my pgpool status indicates that it failed even after trying to
>> restart the service.  What can I check to see what configurations
>> may be causing this?  Thanks.
>> 
>> Alan
>> 
>> ● pgpool2.service - pgpool-II
>>    Loaded: loaded (/lib/systemd/system/pgpool2.service; enabled; vendor preset: enabled)
>>    Active: failed (Result: exit-code) since Fri 2020-10-16 10:16:21 HST; 10min ago
>>      Docs: man:pgpool(8)
>>   Process: 20052 ExecStart=/usr/sbin/pgpool -n (code=exited, status=3)
>>  Main PID: 20052 (code=exited, status=3)
> 
> You may go to https://www.pgpool.net/mailman/listinfo/pgpool-general to
> ask questions.
> Could you share the result of "systemctl status pgpool" and pgpool log?
> 
> -- 
> Bo Peng <pengbo@.co>
> SRA OSS, Inc. Japan









using psql 11.4 with a server 13.0 && meta commands

2020-10-21 Thread Matthias Apitz


Hello,

I've found the following problem using psql 11.4 against a server 13.0:

$ export LANG=en_US.UTF-8
$ psql -Usisis -dsisis
psql (11.4, server 13.0)
WARNING: psql major version 11, server major version 13.
 Some psql features might not work.
Type "help" for help.

sisis=# \d dbctest
FEHLER:  Spalte c.relhasoids existiert nicht
LINE 1: ...riggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoi...
 ^
(The German FEHLER line translates to: ERROR:  column c.relhasoids does not exist)
sisis=#

Is this expected? Are there other, bigger issues with meta-commands? I see the
message "Some psql features might not work.", but I didn't expect it to affect
"normal" commands of daily use...

Thanks

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Без книги нет знания, без знания нет коммунизма (Влaдимир Ильич Ленин)
Without books no knowledge - without knowledge no communism (Vladimir Ilyich 
Lenin)
Sin libros no hay saber - sin saber no hay comunismo. (Vladimir Ilich Lenin)




Re: using psql 11.4 with a server 13.0 && meta commands

2020-10-21 Thread Ian Lawrence Barwick
On Wed, 21 Oct 2020 at 20:54, Matthias Apitz wrote:
>
>
> Hello,
>
> I've found the following problem using psql 11.4 against a server 13.0:
>
> $ export LANG=en_US.UTF-8
> $ psql -Usisis -dsisis
> psql (11.4, server 13.0)
> WARNING: psql major version 11, server major version 13.
>  Some psql features might not work.
> Type "help" for help.
>
> sisis=# \d dbctest
> FEHLER:  Spalte c.relhasoids existiert nicht
> LINE 1: ...riggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoi...
>  ^
> sisis=#
>
> Is this expected? Are there other, bigger issues with meta-commands? I see
> the message "Some psql features might not work.", but I didn't expect it to
> affect "normal" commands of daily use...

Yes, this is fully expected: the system catalogue may change between server
versions, and hence so does the SQL that the meta-commands need to run. Any
psql version should be backwards-compatible with older server versions, but
older psql versions cannot, of course, know what SQL might be needed for
newer server versions. Hence the warning.
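
The failure above is a concrete example: pg_class.relhasoids was removed in
PostgreSQL 12, but the \d command in psql 11 still queries it. A quick way to
confirm which server version you are actually talking to (a trivial sketch):

  SELECT version();
  SHOW server_version_num;   -- e.g. 130000 for 13.0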


Regards

Ian Barwick
-- 
EnterpriseDB: https://www.enterprisedb.com




Re: using psql 11.4 with a server 13.0 && meta commands

2020-10-21 Thread Adrian Klaver

On 10/21/20 4:54 AM, Matthias Apitz wrote:


Hello,

I've found the following problem using psql 11.4 against a server 13.0:

$ export LANG=en_US.UTF-8
$ psql -Usisis -dsisis
psql (11.4, server 13.0)
WARNING: psql major version 11, server major version 13.
  Some psql features might not work.
Type "help" for help.

sisis=# \d dbctest
FEHLER:  Spalte c.relhasoids existiert nicht
LINE 1: ...riggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoi...
  ^
sisis=#

Is this expected? Are there other, bigger issues with meta-commands? I see the
message "Some psql features might not work.", but I didn't expect it to affect
"normal" commands of daily use...



https://www.postgresql.org/docs/current/app-psql.html
"
Meta-Commands

Anything you enter in psql that begins with an unquoted backslash is a 
psql meta-command that is processed by psql itself. These commands make 
psql more useful for administration or scripting. Meta-commands are 
often called slash or backslash commands.

"


\ commands are a psql feature, so they are potentially affected.



Thanks

matthias




--
Adrian Klaver
adrian.kla...@aklaver.com




json_to_recordset() and CTE performance

2020-10-21 Thread Matt DeLuco
Hello,

I've written an SQL function that takes a single JSON parameter, in this case 
an array of objects, each with eight properties (example below). This function 
uses json_to_recordset() in a CTE to insert three rows on two tables. It takes 
nearly 7 minutes to insert my dataset of 8935 records (in JSON), using a small 
Python script (the script parses the JSON in < 1s) and PostgreSQL 13 on my 
MacBook Air.

As an experiment I wrote a second SQL function with eight parameters, one per 
property of the JSON objects, requiring a discrete function call per record. I 
wrote a Python script to insert all 8935 records (making as many calls to the 
SQL function), which executed in around 2 minutes.

I'm very much a novice at interpreting EXPLAIN (ANALYZE) and hoping someone can 
help me better optimize my original function. Both the function and the results 
of EXPLAIN ANALYZE are provided below. Is it perhaps a limitation of CTEs or 
json_to_recordset(), so that an entirely different approach is necessary (like 
the second function I wrote, with one call per record)? I was really hoping to 
make this work: I've written a small API to my database using SQL and PL/pgSQL 
functions, each taking a single JSON parameter, and my web backend acts almost 
as an HTTP proxy to the database. It's a different way of doing things as far 
as webdev goes, but (for me) an interesting experiment. I like PostgreSQL and 
really want to take advantage of it.

I've tried a few "optimizations". Removing the final SELECT and returning VOID 
saves around 1.5 min. Removing some extra JOINs saves a little time but nothing 
substantial (the joins against account.t_account in some places are there to 
check "ownership" of the record by a given client_id).

Using a subquery seems like it could be much faster than a CTE, but I don’t 
know how to insert three rows on two tables using a subquery.

Any advice is appreciated, thank you in advance.

Matt


INPUT:
(The CTE illustrates all the properties and types)

[
  {
    "bank_account_id": 1324,
    "transaction_id": "abc123",
    "transaction_date": "2020-10-20",
    …
  },
  …
]

OUTPUT:
(Not sure what I've done to create the nested arrays, but it's unnecessary...
I can shave off ~1.5 min by returning VOID.)

[
  [
{
  "id": 250185
},
{
  "id": 250186
},
...
  ]
]

FUNCTION:

CREATE OR REPLACE FUNCTION journal.create_with_categories(in_json JSON)
RETURNS JSON
AS $$

WITH data AS (
SELECT
(in_json#>>'{context, client_id}')::BIGINT AS client_id,
nextval(pg_get_serial_sequence('journal', 'id')) AS journal_id,
bank_transaction_type,
x.*
FROM json_to_recordset(in_json->'data')
AS x (
bank_account_id BIGINT,
transaction_id TEXT,
transaction_date DATE,
posted_date DATE,
amount finance.monetary,
description TEXT,
parent_account_id BIGINT,
child_account_id BIGINT
),
LATERAL bank_account.get_transaction_type_by_id(x.bank_account_id,
x.amount) AS bank_transaction_type
),
insert_journal_entry AS (
INSERT INTO journal.journal (
client_id, id, bank_account_id,
transaction_id, transaction_date, posted_date,
description
)
SELECT
client_id, journal_id, bank_account_id,
transaction_id, transaction_date, posted_date,
description
FROM data
),
insert_account_entries AS (
INSERT INTO journal.account2journal(
account_id, journal_id, amount, type
)
-- T account
SELECT
t.id,
d.journal_id,
@ d.amount,
CASE WHEN d.bank_transaction_type = 'debit'::transaction_type
THEN 'credit'::transaction_type
ELSE 'debit'::transaction_type
END
FROM data d
LEFT JOIN account.t_account t
ON (t.id = COALESCE(d.child_account_id, d.parent_account_id))
WHERE t.client_id = d.client_id OR t.id IS NULL

UNION ALL

-- bank account
SELECT
t.id, d.journal_id, @ d.amount, d.bank_transaction_type
FROM data d
JOIN bank_account.bank_account b
ON (b.id = d.bank_account_id)
JOIN account.t_account t
ON (t.id = b.t_account_id)
WHERE
t.client_id = d.client_id
)
SELECT json_agg(d) FROM (SELECT d.journal_id AS id FROM data AS d) AS d;

$$ LANGUAGE sql;


EXPLAIN ANALYZE:
(From logs)

Aggregate  (cost=24.24..24.25 rows=1 width=32) (actual time=388926.249..388926.371 rows=1 loops=1)
  Buffers: shared hit=53877 dirtied=2
  CTE data
    ->  Nested Loop  (cost=0.26..4.76 rows=100 width=148) (actual time=183.906..388716.550 rows=8935 loops=1)
          Buffers: shared hit=53877 dirtied=2
          ->  Function Scan

Re: temp table same name real table

2020-10-21 Thread David G. Johnston
Moving this over to -hackers and the commitfest

https://www.postgresql.org/message-id/CAKFQuwaZjsaQ9KdDKjcwgGo3axg2t0BdBVYJiHf%3DH9nbvsxQbQ%40mail.gmail.com

David J.


On Thu, Oct 1, 2020 at 5:14 PM David G. Johnston 
wrote:

> On Wed, Sep 30, 2020 at 7:41 AM Tom Lane  wrote:
>
>> If you really really need to do this, I'd counsel using EXECUTE to
>> ensure no caching happens.  But I concur with Michael that it's
>> fundamentally a bad idea.
>>
>
> Agreed, though the documentation seems a bit loose here.  The fact that
> the temp table hides the permanent one is a side-effect of pg_temp being
> placed first in the default search_path.  If it is explicitly placed last
> the permanent table would be found again.
>
> Adding a reminder that search_path resolution happens only during new plan
> creation (even if we don't generally cover caching implementation in detail;
> I didn't look around for this one) seems like a worthwhile addition.
>
> I propose the following:
>
> diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
> index 087cad184c..a400334092 100644
> --- a/doc/src/sgml/ref/create_table.sgml
> +++ b/doc/src/sgml/ref/create_table.sgml
> @@ -171,8 +171,9 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
>If specified, the table is created as a temporary table.
>Temporary tables are automatically dropped at the end of a
>session, or optionally at the end of the current transaction
> -  (see ON COMMIT below).  Existing permanent
> -  tables with the same name are not visible to the current session
> +  (see ON COMMIT below).  The default
> +  search_path includes the temporary schema first and so identically
> +  named existing permanent tables are not chosen for new plans
>while the temporary table exists, unless they are referenced
>with schema-qualified names. Any indexes created on a temporary
>table are automatically temporary as well.
>
> David J.
>
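
A minimal sketch of the shadowing behaviour being documented (the table names
are hypothetical):

  CREATE TABLE t (v text);           -- permanent table in public
  CREATE TEMP TABLE t (v text);      -- pg_temp.t now shadows public.t
  INSERT INTO t VALUES ('temp');     -- resolves to pg_temp.t
  SELECT * FROM public.t;            -- schema-qualified: still reaches the permanent table
  SET search_path = "$user", public, pg_temp;
  SELECT * FROM t;                   -- pg_temp listed last: resolves to public.t again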
>


Re: using psql 11.4 with a server 13.0 && meta commands

2020-10-21 Thread Ireneusz Pluta/wp.pl

On 2020-10-21 at 13:54, Matthias Apitz wrote:

Hello,

I've found the following problem using psql 11.4 against a server 13.0:

You may install a separate 13.0 psql binary and use that one against the 13.0 
server.
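
For example (a sketch; the package names and paths are the PGDG defaults and 
are distribution-specific):

  # RHEL/CentOS, PGDG repository (client-only package):
  sudo yum install postgresql13
  /usr/pgsql-13/bin/psql -Usisis -dsisis

  # Debian/Ubuntu, PGDG repository:
  sudo apt install postgresql-client-13
  psql -Usisis -dsisis   # pg_wrapper normally selects the newest installed client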




Re: json_to_recordset() and CTE performance

2020-10-21 Thread Michael Lewis
Version? What is the value for work_mem, and which other configs are
non-default? I see some estimates that are rather far off, like this one:

->  Nested Loop  (cost=0.26..4.76 rows=100 width=148) (actual time=183.906..388716.550 rows=8935 loops=1)
      Buffers: shared hit=53877 dirtied=2
      ->  Function Scan on json_to_recordset x  (cost=0.01..1.00 *rows=100* width=128) (actual time=130.645..142.316 *rows=8935* loops=1)
      ->  Function Scan on get_transaction_type_by_id bank_transaction_type  (cost=0.25..0.26 rows=1 width=4) (actual time=0.154..0.156 rows=1 loops=8935)
            Buffers: shared hit=18054

Sometimes putting data into a temp table and analyzing it can be rather
helpful to ensure the planner has statistics on the number of records,
ndistinct, most common values, etc. I would try doing that with the result
of json_to_recordset and skipping the function call
to get_transaction_type_by_id until later, just to see how it performs.

That said, the estimate from json_to_recordset seems to be hardcoded at 100.
I haven't checked the source code, but I know that when defining a
set-returning function there is a ROWS option which gives the planner a static
value to assume for the function's output, so that would make sense.
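
A sketch of both suggestions (self-contained; the JSON literal stands in for
the in_json parameter, and the wrapper function is hypothetical):

  -- materialize the recordset, then ANALYZE so the planner has real stats:
  BEGIN;
  CREATE TEMP TABLE data_tmp ON COMMIT DROP AS
  SELECT x.*
  FROM json_to_recordset('[{"bank_account_id": 1324, "amount": 12.34}]'::json)
       AS x (bank_account_id bigint, amount numeric);
  ANALYZE data_tmp;
  EXPLAIN ANALYZE SELECT * FROM data_tmp;
  COMMIT;

  -- alternatively, wrap the call in an SRF carrying a ROWS hint:
  CREATE FUNCTION expand_data(j json)
  RETURNS TABLE (bank_account_id bigint, amount numeric)
  LANGUAGE sql ROWS 9000 AS $$
      SELECT *
      FROM json_to_recordset(j)
           AS x (bank_account_id bigint, amount numeric)
  $$;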


Re: json_to_recordset() and CTE performance

2020-10-21 Thread Matt DeLuco
PostgreSQL 13.0.

You'd have to be specific about the configs you're looking for; I'm using 
Postgres.app (postgresapp.com) and am uncertain whether it's distributed with 
non-default configs.

But, a quick grep shows these items that are configured:
max_wal_size = 1GB
min_wal_size = 80MB
shared_buffers = 128MB

work_mem is not configured so presumably it’s the default of 4MB.

I'll try the temp tables. That seems similar to what I found searching online. 
Are you suggesting that as a permanent solution, or just as a means to better 
analyze performance?

Thanks,

Matt

> On Oct 21, 2020, at 1:25 PM, Michael Lewis  wrote:
> 
> Version? What is the value for work_mem, and which other configs are 
> non-default? I see some estimates that are rather far off, like this one:
> 
> ->  Nested Loop  (cost=0.26..4.76 rows=100 width=148) (actual time=183.906..388716.550 rows=8935 loops=1)
>   Buffers: shared hit=53877 dirtied=2
>   ->  Function Scan on json_to_recordset x  (cost=0.01..1.00 rows=100 width=128) (actual time=130.645..142.316 rows=8935 loops=1)
>   ->  Function Scan on get_transaction_type_by_id bank_transaction_type  (cost=0.25..0.26 rows=1 width=4) (actual time=0.154..0.156 rows=1 loops=8935)
> Buffers: shared hit=18054
> 
> Sometimes putting data into a temp table and analyzing it can be rather 
> helpful to ensure the planner has statistics on the number of records, 
> ndistinct, most common values, etc. I would try doing that with the result of 
> json_to_recordset and skipping the function call to 
> get_transaction_type_by_id until later, just to see how it performs.
> 
> That said, the estimate from json_to_recordset seems to be hardcoded at 100. 
> I haven't checked the source code, but I know that when defining a 
> set-returning function there is a ROWS option which gives the planner a 
> static value to assume for the function's output, so that would make sense.



How to get debuginfo from building source code

2020-10-21 Thread Hou, Zhijie
Hi

My PostgreSQL 10.3 on CentOS 7.4 exited abnormally and generated a core dump.

I want to analyze the core dump, but I cannot find postgresql10-debuginfo-10.3 
in
https://download.postgresql.org/pub/repos/yum/debug/10/redhat/rhel-7.4-x86_64/

Can I get the debuginfo by building the source code?
If so, what compile options should I use?

Best regards,
houzj






Re: How to get debuginfo from building source code

2020-10-21 Thread Ian Lawrence Barwick
On Thu, 22 Oct 2020 at 11:51, Hou, Zhijie wrote:
>
> Hi
>
> My PostgreSQL 10.3 on CentOS 7.4 exited abnormally and generated a core dump.
>
> I want to analyze the core dump, but I cannot find
> postgresql10-debuginfo-10.3 in
> https://download.postgresql.org/pub/repos/yum/debug/10/redhat/rhel-7.4-x86_64/
>
> Can I get the debuginfo by building the source code?
> If so, what compile options should I use?

See instructions here:

  https://wiki.postgresql.org/wiki/Developer_FAQ#Compile-time
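
In short (a sketch of the usual flags; the wiki page above is authoritative,
and the install prefix and core path here are assumptions):

  # build with debug symbols and without optimization:
  ./configure --enable-debug CFLAGS="-O0 -g"
  make && sudo make install

  # then open the core file with the matching binary:
  gdb /usr/local/pgsql/bin/postgres /path/to/core.dump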


Regards

Ian Barwick

-- 
EnterpriseDB: https://www.enterprisedb.com




Hot backup in PostgreSQL

2020-10-21 Thread W.P.

Hi there,

how to do "hot backup" (copying files) while database running?

Not using pg_dump.

Is there some equivalent of Oracle (I knew v8) "ALTER DATABASE BEGIN 
BACKUP", which froze writes to database files, pushing everything to 
redo files?



Laurent.





Re: Hot backup in PostgreSQL

2020-10-21 Thread Paul Förster
Hi Laurent,

> On 22. Oct, 2020, at 08:42, W.P.  wrote:
> 
> Hi there,
> 
> how to do "hot backup" (copying files) while database running?
> 
> Not using pg_dump.
> 
> Is there some equivalent of Oracle (I knew v8) "ALTER DATABASE BEGIN BACKUP", 
> which froze writes to database files, pushing everything to redo files?

Yes, there is. Please read the chapter 25.3.3.1, "Making a Non-Exclusive 
Low-Level Backup":

https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP
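
In outline (a sketch of the non-exclusive API as it stands in PostgreSQL 13;
the label and paths are hypothetical, and the docs above give the full
procedure, including WAL archiving):

  -- keep this session open for the duration of the backup:
  SELECT pg_start_backup('nightly', false, false);
  -- ... copy the data directory with tar/rsync ...
  SELECT * FROM pg_stop_backup(false);

Or let pg_basebackup do all of that for you:

  pg_basebackup -D /backups/base -Ft -z -P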

Cheers,
Paul