Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Dominique Devienne
On Thu, Mar 16, 2023 at 6:48 AM Laurenz Albe 
wrote:

> As we wrote, some of us think that cursors are useful, and we tried to
> explain why we think that.  If you don't think that cursors are useful,
> don't use them.  We are not out to convince you otherwise.
>

Perhaps OT (I only skimmed this thread), but when I compared cursors to
regular statements/queries from a libpq client application perspective,
on the same "streamable" queries (i.e. without a sort), the cursor shone
in terms of time-to-first-row, compared to waiting for the whole result
set; OTOH, getting the full result took about 2x as long with the cursor,
compared to the regular SELECT statement.

Thus in my mind, it really depends on what you value in a particular
situation, latency or throughput. --DD

PS: In my testing, I used forward-only cursors.
PPS: I don't recall the result set's cardinality or byte size, nor the
batch size used with the cursor.
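
The comparison can be sketched in plain SQL; the table name and batch size
below are hypothetical, and results will vary with the data and the network:

```sql
BEGIN;

-- "Streamable" query: no sort/hash step, so rows can flow immediately.
DECLARE c NO SCROLL CURSOR FOR
    SELECT * FROM big_table;

-- Time-to-first-row: only the first batch must be produced before the
-- client sees any data.
FETCH FORWARD 1000 FROM c;
-- ...repeat the FETCH until it returns fewer rows than requested...

CLOSE c;
COMMIT;
```

Each FETCH costs a client-server round trip, which is one plausible source
of the ~2x slowdown on retrieving the full result.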


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Pavel Stehule
On Thu, Mar 16, 2023 at 9:18 AM Dominique Devienne 
wrote:

> On Thu, Mar 16, 2023 at 6:48 AM Laurenz Albe 
> wrote:
>
>> As we wrote, some of us think that cursors are useful, and we tried to
>> explain why we think that.  If you don't think that cursors are useful,
>> don't use them.  We are not out to convince you otherwise.
>>
>
> Perhaps OT (I only skimed this thread) but when I compared Cursors to
> regular Statements / Queries
> from a LIBPQ client application perspective, on the same "streamable"
> queries (i.e. w/o a sort), Cursor
> shined in terms of time-to-first-row, compared to waiting for the whole
> ResultSet, but getting the full result
> OTOH was 2x as long with Cursor, compared to the regular SELECT Statement.
>
> Thus in my mind, it really depends on what you value in a particular
> situation, latency or throughput. --DD
>
>
cursors are optimized for minimal cost of first row, queries are optimized
for minimal cost of last row

Regards

Pavel


> PS: In my testing, I used forward-only cursors
> PPS: I don't recall the ResultSet cardinality or byte size, nor the
> batching used with the Cursor.
>


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Dominique Devienne
On Thu, Mar 16, 2023 at 9:23 AM Pavel Stehule 
wrote:

> On Thu, Mar 16, 2023 at 9:18 AM Dominique Devienne 
> wrote:
>
>> [...] depends on what you value in a particular situation, latency or
>> throughput. --DD
>>
>
> cursors are optimized for minimal cost of first row, queries are optimized
> for minimal cost of last row
>

That's a nice way to put it, Pavel.

And to have it both ways, use COPY in the binary protocol? That way the
rows are streamed to you in arbitrary chunks as soon as they are available
(I hope), and the burden is on you, the client, to decode and use those
rows in parallel as they are "streamed" to you.

I've yet to test that (thus the 'I hope' above). I used COPY binary for
INSERTs, and COPY text/json for SELECTs, but not yet COPY binary for
SELECTs. I'm hoping the latency of COPY will be small compared to a
regular SELECT, where I have to wait for libpq to assemble the whole
result set. Are my hopes unfounded? --DD
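
For reference, the server-side shape of that would be something like the
following (hypothetical table and columns; untested here, per the above):

```sql
COPY (SELECT k, v FROM big_table WHERE k > 42)
    TO STDOUT (FORMAT binary);
```

From libpq, the client drives this with PQgetCopyData(), which hands back
one row of COPY data per call as chunks arrive, rather than waiting for a
complete PGresult.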


Re: pg_upgrade Only the install user can be defined in the new cluster

2023-03-16 Thread Daniel Gustafsson
> On 15 Mar 2023, at 16:39, Dávid Suchan  wrote:
> 
> It prints out:
>  count
> ---
>  1

I have a feeling the cluster you tried to upgrade to doesn't match this one, as
the check that failed will fail on values other than 1.  Did you create them
equally?  If you try to upgrade into this cluster, even just with the --check
option, does that yield more success?

--
Daniel Gustafsson





Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Pavel Stehule
On Thu, Mar 16, 2023 at 9:55 AM Dominique Devienne 
wrote:

> On Thu, Mar 16, 2023 at 9:23 AM Pavel Stehule 
> wrote:
>
>> On Thu, Mar 16, 2023 at 9:18 AM Dominique Devienne 
>> wrote:
>>
>>> [...] depends on what you value in a particular situation, latency or
>>> throughput. --DD
>>>
>>
>> cursors are optimized for minimal cost of first row, queries are
>> optimized for minimal cost of last row
>>
>
> That's a nice way to put it Pavel.
>
> And to have it both ways, use COPY in binary protocol? That way the rows
> are streamed
> to you in arbitrary chunks as soon as available (I hope), and the burden
> is on you the
> client to decode and use those rows in parallel as they are "streamed" to
> you.
>
> I've yet to test that (thus the 'i hope' above). I used COPY binary for
> INSERTs,
> and COPY text/json for SELECTs, not yet COPY binary for SELECTs. I'm hoping
> the latency of COPY will be small compared to a regular SELECT where I
> have to
> wait for LIBPQ to assemble the whole ResultSet. Are my hopes unfounded?
> --DD
>

COPY is a different creature - it has no execution plan, and it is not
interpreted by the executor.

Using COPY SELECT instead of SELECT looks like premature optimization. The
performance benefit will be minimal (maybe there can be exceptions
depending on the data, network properties, or interface). Cursors and
queries can use the binary protocol, if the client supports it.

Regards

Pavel


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Dominique Devienne
On Thu, Mar 16, 2023 at 11:15 AM Pavel Stehule 
wrote:

> On Thu, Mar 16, 2023 at 9:55 AM Dominique Devienne 
> wrote:
>
>> On Thu, Mar 16, 2023 at 9:23 AM Pavel Stehule 
>> wrote:
>>
>>> On Thu, Mar 16, 2023 at 9:18 AM Dominique Devienne 
>>> wrote:
>>>
 [...] depends on what you value in a particular situation, latency or
 throughput. --DD

>>>
>>> cursors are optimized for minimal cost of first row, queries are
>>> optimized for minimal cost of last row
>>>
>>
>> That's a nice way to put it Pavel.
>>
>> And to have it both ways, use COPY in binary protocol?
>>
>
> COPY is a different creature - it has no execution plan, and it is not
> interpreted by the executor.
>

OK. Not sure what that means exactly. There's still a SELECT, with possibly
WHERE clauses and/or JOINs, no?
Doesn't that imply an execution plan? I'm a bit confused.


> Using COPY SELECT instead SELECT looks like premature optimization.
>

Possible. But this is not an e-commerce web site with a PostgreSQL backend
here. This is classical client-server, with heavyweight desktop apps
loading heavyweight data (in number and size) from PostgreSQL. So
performance (throughput) matters a lot to us. And I measure that
performance in both rows/sec and MB/sec, not (itsy-bitsy)
transactions/sec.


> The performance benefit will be minimal ([...]).
>

COPY matters on INSERT for sure performance-wise.
So why wouldn't COPY matter for SELECTs too?


> Cursors, queries can use binary protocol, if the client can support  it.
>

I already do. But we need all the speed we can get.
In any case, I'll have to try and see/test for myself eventually.
We cannot afford to leave any performance gains on the table.


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Pavel Stehule
On Thu, Mar 16, 2023 at 11:52 AM Dominique Devienne 
wrote:

> On Thu, Mar 16, 2023 at 11:15 AM Pavel Stehule 
> wrote:
>
>> On Thu, Mar 16, 2023 at 9:55 AM Dominique Devienne 
>> wrote:
>>
>>> On Thu, Mar 16, 2023 at 9:23 AM Pavel Stehule 
>>> wrote:
>>>
 On Thu, Mar 16, 2023 at 9:18 AM Dominique Devienne <
 ddevie...@gmail.com> wrote:

> [...] depends on what you value in a particular situation, latency or
> throughput. --DD
>

 cursors are optimized for minimal cost of first row, queries are
 optimized for minimal cost of last row

>>>
>>> That's a nice way to put it Pavel.
>>>
>>> And to have it both ways, use COPY in binary protocol?
>>>
>>
>> COPY is a different creature - it has no execution plan, and it is not
>> interpreted by the executor.
>>
>
> OK. Not sure what that means exactly. There's still a SELECT, with
> possibly WHERE clauses and/or JOINs, no?
> Doesn't that imply an execution plan? I'm a bit confused.
>
>
>> Using COPY SELECT instead SELECT looks like premature optimization.
>>
>
> Possible. But this is not an e-commerce web-site with a PostgreSQL backend
> here.
> This is classical client-server with heavy weight desktop apps loading
> heavy weight data
> (in number and size) from PostgreSQL. So performance (throughput) does
> matter a lot to us.
> And I measure that performance in both rows/sec and MB/sec, not (itsy
> bitsy) transactions / sec.
>
>
>> The performance benefit will be minimal ([...]).
>>
>
> COPY matters on INSERT for sure performance-wise.
> So why wouldn't COPY matter for SELECTs too?
>

Please, can you show some benchmarks? :-) I don't believe it.

The protocol is already designed for massive reading by queries. If COPY
SELECT is significantly faster than SELECT, then something must be wrong
on one side (server or client).
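
A rough way to run that comparison from psql (hypothetical table; timings
not measured here):

```
\timing on
SELECT * FROM big_table \g /dev/null
\copy (SELECT * FROM big_table) TO '/dev/null'
```

Both should report a Time: line with \timing on, though this exercises
psql's text-mode path, not a binary-protocol client.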

Regards

Pavel


>
>> Cursors, queries can use binary protocol, if the client can support  it.
>>
>
> I already do. But we need all the speed we can get.
> In any case, I'll have to try and see/test for myself eventually.
> We cannot afford to leave any performance gains on the table.
>


Re: valgrind a background worker

2023-03-16 Thread Jon Erdman

On 2/10/23 9:08 PM, Jon Erdman wrote:
> On 2/10/23 3:05 PM, Tom Lane wrote:
>> Jeffrey Walton  writes:
>>> On Fri, Feb 10, 2023 at 10:04 AM Tom Lane  wrote:
 You have to valgrind the whole cluster AFAIK.  Basically, start
 the postmaster under valgrind with --trace-children=yes.
 For leak tracking you probably also want
 --leak-check=full --track-origins=yes --read-var-info=yes
>>
>>> One additional comment... the program in question and PostgreSQL
>>> should also be built with -g -O1 per
>>> https://valgrind.org/docs/manual/quick-start.html . Otherwise, there's
>>> a risk the line information will not be accurate or usable.
>>
>> Yeah.  Also, you need to compile Postgres with -DUSE_VALGRIND
>> if you want valgrind to have any idea about palloc/pfree.
> 
> Thanks much both of you! I'll report back how it goes ;)

FYI folks: once I got the build done properly on Linux (the valgrind brew 
formula wouldn't install on macOS), I was able to find and fix my leaks. 
They were from not calling pfree on StringInfo.data.
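
In case it helps others: a StringInfo's buffer is palloc'd, so in a
long-lived memory context it has to be freed explicitly. A minimal sketch
of the pattern (backend code; the helper function and message are made up):

```c
#include "postgres.h"
#include "lib/stringinfo.h"

/* Hypothetical helper called from a background worker's main loop. */
static void
log_tick(int n)
{
    StringInfoData buf;

    initStringInfo(&buf);               /* palloc's buf.data */
    appendStringInfo(&buf, "worker tick %d", n);
    elog(LOG, "%s", buf.data);

    pfree(buf.data);                    /* without this, valgrind reports a
                                         * leak on every iteration */
}
```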
-- 
Jon Erdman (aka StuckMojo)
      PostgreSQL Zealot





Aw: Re: Removing trailing zeros (decimal places) from a numeric (pre trim_scale()) with unexpected behaviour

2023-03-16 Thread magog002
Hello Erik,

Many thanks for the feedback (Oracle) and the second option to get rid of the 
decimal separator character.
The case is closed.

Kind regards
Juergen


> Sent: Wednesday, 15 March 2023 at 17:50
> From: "Erik Wienhold" 
> To: magog...@web.de, pgsql-general@lists.postgresql.org
> Subject: Re: Removing trailing zeros (decimal places) from a numeric (pre 
> trim_scale()) with unexpected behaviour
>
> > On 15/03/2023 14:51 CET magog...@web.de wrote:
> >
> > I want to remove not needed decimal places / trailing zeros from a numeric.
> > I know this can be done starting PG >=13 with TRIM_SCALE(numeric) which 
> > would
> > solve my issue (with an additional CAST to TEXT at the end).  Unfortunately
> > the production database is still running with PostgreSQL 12.x and this is
> > something I currently can't change.
> >
> > So to get rid of the not needed decimal places I tried TO_CHAR(..., 
> > 'FM')
> > in combination with TRUNC() as shown below with examples. This does not 
> > remove
> > the decimal places separator if the complete scale digits are zero (60.000).
> 
> Cast the to_char result to numeric and then to text.  This will also remove
> trailing zeros.
> 
>   select
> to_char('60.000'::numeric, 'FM999.999')::numeric::text,
> to_char('60.100'::numeric, 'FM999.999')::numeric::text;
> 
>to_char | to_char
>   -+-
>60  | 60.1
>   (1 row)
> 
> > The current behaviour might be intentional but it 'smells like a bug' to me.
> 
> It follows Oracle's to_char behavior:
> 
>   select to_char('60.000', 'FM999.999') from dual;
> 
>   TO_CHAR('60.000','FM999.999')
>   -
>   60.
> 
> --
> Erik
> 
> 
>




RE: EXTERNAL: Re: "No Free extents", table using all allocated space but no rows!

2023-03-16 Thread Dolan, Sean
I messed up and confused issues. The error is:
ERROR: Could not extend pg_tblspc/16555/PG_13_20200// No space left on 
device
HINT: Check free disk space

So the schema is "full" and the offender is this one table. I can't TRUNCATE 
as there needs to be space to perform the action. Is there a way to see if 
there is a transaction on that table, like you allude to?

-Original Message-
From: Laurenz Albe  
Sent: Wednesday, March 15, 2023 11:45 PM
To: Dolan, Sean (US N-ISYS Technologies Inc.) ; 
pgsql-general@lists.postgresql.org
Subject: EXTERNAL: Re: "No Free extents", table using all allocated space but 
no rows!

On Thu, 2023-03-16 at 01:58 +, Dolan, Sean wrote:
> Environment: PostGres 13 on RedHat 7.9.
>  
> I am using logical replication (publisher/subscriber) between two 
> databases and there are times where one of our schemas gets to 100% of 
> allocated space (No Free Extents).
> I went into the schema and did a \dt+ to see the amount of size being 
> used and I could see one of the tables somehow shows 16GB, essentially the 
> amount of allocated size.
> Wanting to see what is in that table, I did a simple select * from the 
> table and it returns no rows.  Doing a count(*) also returns 0 rows.
>  
> How can the table be using all that space but there is nothing “in” the table?
> I don’t care about the data (as I am testing) so I can drop and recreate that 
> one table.
> \dt+ would then show 0 bytes.    Later, I will then create a 
> subscription and then I will get a No Free Extents error again and again the 
> table has filled up.
>  
> What can I look for?

I don't think that there is an error message "no free extents".

It can easily happen that a table is large, but SELECT count(*) returns 0.
That would mean that either the table is empty and VACUUM truncation didn't 
work, or that the table contains tuples that are not visible to your user, 
either because VACUUM didn't process the table yet, or because your snapshot is 
too old to see the data, or because the transaction that created the rows is 
still open.

If you don't care about the data, your easiest option is to TRUNCATE the table.
If TRUNCATE is blocked, kill all transactions that block it.
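
To answer the "is there a transaction on that table" part: something along
these lines lists the sessions holding locks on a given table (a sketch;
substitute your table name):

```sql
SELECT a.pid, a.state, a.xact_start, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'my_table'::regclass
  AND l.granted;
```

Offending backends can then be killed with SELECT pg_terminate_backend(pid).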

Yours,
Laurenz Albe


Re: Is the PL/pgSQL refcursor useful in a modern three-tier app?

2023-03-16 Thread Adrian Klaver

On 3/15/23 18:41, Bryn Llewellyn wrote:

adrian.kla...@aklaver.com  wrote:

I have a hard time fathoming why someone who writes documentation does 
not actually read documentation.


Ouch. In fact, I had read the whole of the "43.7. Cursors" section in 
the "PL/pgSQL" chapter (www.postgresql.org/docs/15/plpgsql-cursors.html). 
And the sections in the "SQL Commands" chapter for "declare", "fetch" and 
"close". But several of the key concepts didn't sink in, and this 
prevented me not only from understanding what some of the examples 
showed but, worse, from being able to use the right vocabulary to 
express what confused me.


Given this from your original question:

" (Anyway, without anything like Oracle PL/SQL's packages, you have no 
mechanism to hold the opened cursor variable between successive server 
calls.)"



What part of this:

CREATE TABLE test (col text);
INSERT INTO test VALUES ('123');

CREATE FUNCTION reffunc(refcursor) RETURNS refcursor AS '
BEGIN
OPEN $1 FOR SELECT col FROM test;
RETURN $1;
END;
' LANGUAGE plpgsql;

BEGIN;
SELECT reffunc('funccursor');
FETCH ALL IN funccursor;
COMMIT;

did not make sense in that context?


The open portal instances in a particular session are listed in 
pg_cursors. (Why not pg_portals?) When the instance was created with the 


Why are tables also known as relations and you can look them up in 
pg_class or pg_tables?


Answer: It is the rules of the game.





create procedure s.p()
   set search_path = pg_catalog, pg_temp
   language plpgsql
as $body$
declare
   "My Refcursor" cursor for select k, v from s.t order by k;
begin
   open "My Refcursor";
  raise info '%', pg_typeof("My Refcursor")::text;
end;
$body$;

begin;
call s.p();
select name, statement from pg_cursors;
fetch forward 5 in "My Refcursor";
end;

(I included "pg_typeof()" here just to make the point that it reports 
"refcursor" and not the plain "cursor" that the declaration might lead 
you to expect. It reports "refcursor" in all the other PL/pgSQL examples 
too.)


https://www.postgresql.org/docs/current/plpgsql-cursors.html

"All access to cursors in PL/pgSQL goes through cursor variables, which 
are always of the special data type refcursor. One way to create a 
cursor variable is just to declare it as a variable of type refcursor. 
Another way is to use the cursor declaration syntax, which in general is:


name [ [ NO ] SCROLL ] CURSOR [ ( arguments ) ] FOR query;
"

Again, I would like to know how that is confusing?




With all these variants (and there may be more), and with only some of 
them exemplified, I don't feel too stupid for getting confused.





Where you get confused is in moving the goal posts.

What starts out with:

"(Anyway, without anything like Oracle PL/SQL's packages, you have no 
mechanism to hold the opened cursor variable between successive server 
calls.)



Is it fair to say that the PL/pgSQL refcursor is useful, at best, only 
in very special use-cases?"


evolves into a deep dive into all things cursors.

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_upgrade Only the install user can be defined in the new cluster

2023-03-16 Thread Dávid Suchan
So I tried upgrading into this brand-new cluster I created (using initdb -D 
/somedatapathichose). Running the upgrade with --check worked; it returned a 
message that the clusters are compatible. Then I stopped the new cluster and 
ran pg_upgrade without --check, which resulted in another:
Checking database user is the install user
Only the install user can be defined in the new cluster.
Failure, exiting
I don't know what to check for anymore; the log files don't say anything other 
than "Only the install user can be defined in the new cluster", when postgres 
is the install user everywhere.
The ' SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_'; ' 
prints count 4, and ' SELECT rolname FROM pg_roles WHERE oid = 10; ' prints 
rolname postgres.

From: Daniel Gustafsson 
Sent: Thursday, 16 March 2023 10:28
To: Dávid Suchan 
Cc: pgsql-gene...@postgresql.org 
Subject: Re: pg_upgrade Only the install user can be defined in the new cluster

> On 15 Mar 2023, at 16:39, Dávid Suchan  wrote:
>
> It prints out:
>  count
> ---
>  1

I have a feeling the cluster you tried to upgrade to doesn't match this one, as
the check that failed will fail on values other than 1.  Did you create them
equally?  If you try to upgrade into this cluster, even just with the --check
option, does that yield more success?

--
Daniel Gustafsson



Re: pg_upgrade Only the install user can be defined in the new cluster

2023-03-16 Thread Tom Lane
Dávid Suchan  writes:
> The ' SELECT COUNT(*) FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_'; ' 
> prints count 4

4?  That would be the problem all right.  What are those, that is what
do you get from

SELECT rolname FROM pg_catalog.pg_roles WHERE rolname !~ '^pg_';

on the new cluster?

regards, tom lane




Re: pg_upgrade Only the install user can be defined in the new cluster

2023-03-16 Thread Daniel Gustafsson
> On 16 Mar 2023, at 15:56, Dávid Suchan  wrote:
> 
> So I tried upgrading into this brand new cluster I created(using initdb -D 
> /somedatapathichose). Running the upgrade with --check worked, it returned 
> message that the clusters are identical. Then I stopped the new cluster, and 
> ran the pg_upgrade without --check, which resulted in another:
> Checking database user is the install user

pg_upgrade --check does not alter the new cluster in any way, so you must have
some form of processing on your system that configured the new cluster with
roles (and possibly other things) in between these runs.

--
Daniel Gustafsson





Re: EXTERNAL: Re: "No Free extents", table using all allocated space but no rows!

2023-03-16 Thread Laurenz Albe
On Thu, 2023-03-16 at 13:20 +, Dolan, Sean wrote:
> I messed up and confused issues.   The error is :  
> ERROR: Could not extend pg_tblspc/16555/PG_13_20200//  No space left 
> on device
> HINT: Check free disk space
> 
> So the schema is "full" and the offender is this one table.
> I can't TRUNCATE as there needs to be space to perform the action.
> Is there a way to see if there is a transaction on that table like you allude 
> to?

Ah, that's different.

If you don't have enough space to run TRUNCATE, and you don't feel like
extending the disk space, DROP TABLE would be a convenient alternative.

Yours,
Laurenz Albe




Many logical replication synchronization worker threads kicking off for ONE table

2023-03-16 Thread Dolan, Sean
The initial email below was the end result of something "runaway" in my 
logical replication.

PostgreSQL 13.6, RedHat 7.9

Database A is the publisher; Database B is the subscriber. Within Database A 
are multiple schemas, and the publication ensures that all the schemas and 
their tables are added. There is one table in particular that has 1.4 million 
rows of data.

I create a subscription on Database B and can see in the log:
LOG: logical replication table synchronization for subscription 
"sub_to_dbaseA", table "alert_history" has started.
CONTEXT:  COPY alert_history line 6668456
LOG: logical replication table synchronization for subscription 
"sub_to_dbaseA", table "alert_history" has started.
CONTEXT:  COPY alert_history line 5174606
LOG: logical replication table synchronization for subscription 
"sub_to_dbaseA", table "alert_history" has started.
CONTEXT:  COPY alert_history line 4325283

Normally I would see a line for "finished", but I never do.

I then actively watch the schema/table with \dt+ and can see that table 
grow in 2GB increments until I fill up the entire drive and run out of room 
for that schema.

I am NOT getting any "checkpoints are occurring too frequently" warnings, as 
I have updated my WAL size:
max_wal_size=4GB
min_wal_size=1GB

Is the system having trouble syncing this amount of data in a quick fashion 
and therefore kicks off more synchronization workers? Anything I can do to 
prevent this?
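
For reference, the subscriber-side settings that cap these sync workers
(defaults shown; I have not verified what mine are set to):

```
# postgresql.conf on the subscriber (defaults; sketch only)
max_logical_replication_workers = 4
max_sync_workers_per_subscription = 2
```

Repeated "has started" lines without a "finished" could also mean the
initial COPY keeps failing and restarting, which would match the table
growing until the disk fills.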

Thank you



-Original Message-
From: Laurenz Albe  
Sent: Thursday, March 16, 2023 1:25 PM
To: Dolan, Sean (US N-ISYS Technologies Inc.) ; 
pgsql-general@lists.postgresql.org
Subject: EXTERNAL: Re: EXTERNAL: Re: "No Free extents", table using all 
allocated space but no rows!

On Thu, 2023-03-16 at 13:20 +, Dolan, Sean wrote:
> I messed up and confused issues.   The error is :  
> ERROR: Could not extend pg_tblspc/16555/PG_13_20200//  No space left 
> on device
> HINT: Check free disk space
> 
> So the schema is "full" and the offender is this one table.
> I can't TRUNCATE as there needs to be space to perform the action.
> Is there a way to see if there is a transaction on that table like you allude 
> to?

Ah, that's different.

If you don't have enough space to run TRUNCATE, and you don't feel like
extending the disk space, DROP TABLE would be a convenient alternative.

Yours,
Laurenz Albe


src/test/examples/testlibpq2.c where the HAVE_SYS_SELECT_H is defined.

2023-03-16 Thread jian he
Hi,
playing around with the testlibpq2.c example from $[0]. I wondered where
HAVE_SYS_SELECT_H is defined?

I searched on the internet and found that people have asked the same
question in $[1].

On my machine, I do have <sys/select.h>.
system version: Ubuntu 22.04.1 LTS
gcc version: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
gcc compile command: gcc pg_testlibpq2.c -I/home/jian/postgres/pg16/include
\
-L/home/jian/postgres/pg16/lib -lpq

[0]https://www.postgresql.org/docs/current/libpq-example.html
[1]
https://stackoverflow.com/questions/37876850/in-compilation-time-how-to-find-the-macro-is-defined-in-which-header-file
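
For what it's worth, on a modern Linux system you can include
<sys/select.h> directly and skip the HAVE_SYS_SELECT_H guard entirely; my
understanding (worth verifying against your tree) is that the macro came
from PostgreSQL's configure-generated pg_config.h in older releases, not
from gcc or Ubuntu. A self-contained sketch of select() as testlibpq2.c
uses it:

```c
#include <sys/select.h>   /* POSIX home of select(), fd_set, timeval */
#include <unistd.h>
#include <assert.h>

/* Return 1 if fd is readable within timeout_ms, 0 on timeout, -1 on error. */
static int
fd_ready(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (rc < 0)
        return -1;
    return FD_ISSET(fd, &rfds) ? 1 : 0;
}
```

testlibpq2.c uses the same call to wait for the socket from PQsocket()
before consuming notifications.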