Performance hit if I create multiple DBs on same instance

2020-11-24 Thread David Gauthier
Hi:

11.3 on linux

I've come up with a plan to archive data from my main DB which involves
creating other DBs on the same server.  But even though there will be zero
activity on the archive DBs in terms of insert/update/delete, and almost no
activity in terms of select, I'm still worried that the mere existence of
these other DBs will steal resources away from the instance and degrade
performance in my main DB. So my question is whether that worry is valid.

Thanks in Advance for any help !


Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread David G. Johnston
On Tue, Nov 24, 2020 at 7:36 AM David Gauthier 
wrote:

> Hi:
>
> 11.3 on linux
>
> I've come up with a plan to archive data from my main DB which involves
> creating other DBs on the same server.  But even though there will be zero
> activity on the archive DBs in terms of insert/update/delete, and almost no
> activity in terms of select, I'm still worried that the mere existence of
> these other DBs will steal resources away from the instance and degrade
> performance in my main DB. So my question is whether or not that worry is
> valid or not.
>
> Thanks in Advance for any help !
>

As long as you use the same cluster there shouldn't be any material
difference between having multiple tables in the same database and those
tables existing in another database in the same cluster.  The cluster-level
resources are the same in either case and have the same number of objects
to worry about.

David J.


Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread Adrian Klaver

On 11/24/20 6:36 AM, David Gauthier wrote:

Hi:

11.3 on linux

I've come up with a plan to archive data from my main DB which involves 
creating other DBs on the same server.  But even though there will be 
zero activity on the archive DBs in terms of insert/update/delete, and 
almost no activity in terms of select, I'm still worried that the mere 
existence of these other DBs will steal resources away from the instance 
and degrade performance in my main DB. So my question is whether or not 
that worry is valid or not.


The primary resource I see them taking is storage. If you have adequate 
space so that the primary and archive databases can grow into it, then I 
don't see a problem on that score. There will also be some additional 
overhead for the automatic VACUUM and ANALYZE operations, but since the 
archive databases are quiescent that will be on the order of monitoring, 
not really processing.




Thanks in Advance for any help !



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread David Gauthier
Ok, thanks.

I was also planning on manually running vacuum, reindex, and analyze on the
main DB after removing the archived data from it.  Does that sound
necessary and reasonable?

On Tue, Nov 24, 2020 at 10:15 AM Adrian Klaver 
wrote:

> On 11/24/20 6:36 AM, David Gauthier wrote:
> > Hi:
> >
> > 11.3 on linux
> >
> > I've come up with a plan to archive data from my main DB which involves
> > creating other DBs on the same server.  But even though there will be
> > zero activity on the archive DBs in terms of insert/update/delete, and
> > almost no activity in terms of select, I'm still worried that the mere
> > existence of these other DBs will steal resources away from the instance
> > and degrade performance in my main DB. So my question is whether or not
> > that worry is valid or not.
>
> The primary resource I see them taking is storage. If you have adequate
> space so that the primary and archive databases can grow into it then I
> don't see a problem on that score. There will also be some additional
> overhead for the automatic VACUUM and ANALYZE operations. Since the
> archive databases are quiescent that will be on the order of monitoring
> not really processing.
>
> >
> > Thanks in Advance for any help !
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread Bruce Momjian
On Tue, Nov 24, 2020 at 10:33:46AM -0500, David Gauthier wrote:
> Ok, thanks. 
> 
> I was also planning on manually running vacuum, reindex and analyze on the 
> main
> DB after removing the data from the main DB after archiving.  Does that sound
> necessary and reasonable ?

This blog entry summarizes the various levels of isolation and their
benefits:

https://momjian.us/main/blogs/pgblog/2012.html#April_23_2012

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread Adrian Klaver

On 11/24/20 7:33 AM, David Gauthier wrote:

Ok, thanks.

I was also planning on manually running vacuum, reindex and analyze on 
the main DB after removing the data from the main DB after archiving.  
Does that sound necessary and reasonable ?


Sounds reasonable.
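
For reference, the manual maintenance pass discussed above could look like the
following sketch (the table name is hypothetical; run it per table, or without a
table name to cover the whole database):

```sql
-- After deleting the archived rows from the main DB:
VACUUM (VERBOSE, ANALYZE) big_table;  -- reclaim dead tuples, refresh planner stats
REINDEX TABLE big_table;              -- rebuild indexes bloated by the deletes
```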






--
Adrian Klaver
adrian.kla...@aklaver.com




Types adaptation in psycopg3

2020-11-24 Thread Daniele Varrazzo
Hello,

I wrote a description of the psycopg3 adaptation system and the main
differences compared to psycopg2: available at
https://www.psycopg.org/articles/2020/11/24/psycopg3-adaptation/

Initial API docs are available at
https://www.psycopg.org/psycopg3/docs/adaptation.html

Feedback is welcome. Cheers!

-- Daniele




Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread Ron

On 11/24/20 8:36 AM, David Gauthier wrote:

Hi:

11.3 on linux

I've come up with a plan to archive data from my main DB which involves 
creating other DBs on the same server.  But even though there will be zero 
activity on the archive DBs in terms of insert/update/delete, and almost 
no activity in terms of select, I'm still worried that the mere existence 
of these other DBs will steal resources away from the instance and degrade 
performance in my main DB.


*Why?*  If the data is just sitting there like a dead lump, it's not using 
any RAM or CPU cycles. And if you're afraid of autovacuum and autoanalyze 
stealing resources, then disable them (at the table level).
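
Disabling them at the table level can be done with a storage parameter, as in
this sketch (the table name is hypothetical):

```sql
-- Disable autovacuum (and with it autoanalyze) for one archive table
ALTER TABLE archive_table SET (autovacuum_enabled = false);
```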


--
Angular momentum makes the world go 'round.


libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread Dave Greeko
Dear All,
I am having a hard time figuring out how to prepare and execute a prepared 
statement with an "IN" operator in the WHERE clause using libpq. The number 
of elements passed to the IN operator is dynamic and varies at runtime.
Here is an example query:

select payload_id,ptime,frequency from codecs where tag IN ('G729','GSM');

The number of elements to filter for in this particular example is 2, but it 
varies at runtime (between 1 and 127 elements).
I would like to know the proper syntax of the (char *query) parameter 
when calling the PQprepare() function.
I tried "select payload_id,ptime,frequency from codecs where tag=ANY(?)" but 
it failed.

Regards,
Dave

Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread David G. Johnston
On Tue, Nov 24, 2020 at 12:14 PM Dave Greeko  wrote:

> Dear All,
> I am having a hard time figuring out how prepare and execute a Prepared
> Statement with an "IN" operator in the WHERE clause using libpq. The total
> elements that will be passed to IN operator is dynamic and varied at
> runtime.
> here is an example query:
>
> select payload_id,ptime,frequency from codecs where tag IN (‘G729’,’GSM’);
>
> The number of elements to filter for with this particular example is 2 but
> this varies at runtime (between 1 to 127 elements).
> I would like to know what’s the proper syntax of the (char *query)
> parameter when calling the PQprepare() function.
> I tried “select payload_id,ptime,frequency from codecs where tag=ANY(?)”
> but failed
>
>
ANY wants an array - you can either pass an array input literal and do
"?::text[]" or you can pass a probably easier to write "csv" value and
write "ANY(string_to_array(?, ','))".

David J.


Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread Dave Greeko
 Hi David,
I tried both and I am getting a syntax error.

char *query="select codec_id,fs_name,pt from codec_defs where pt = 
ANY(string_to_array(?, ','))";
OR
char *query="select codec_id,fs_name,pt from codec_defs where pt = 
ANY(?::text)";

PGresult *res=PQprepare(conn,"codecs",query,1,NULL);


Dave,

Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread Tom Lane
Dave Greeko  writes:
> I tried both and I am getting syntax error.

> char *query="select codec_id,fs_name,pt from codec_defs where pt = 
> ANY(string_to_array(?, ','))";
> OR
> char *query="select codec_id,fs_name,pt from codec_defs where pt = 
> ANY(?::text)";

> PGresult *res=PQprepare(conn,"codecs",query,1,NULL);

Well, your first problem is that "?" is not the parameter symbol
understood by libpq+backend.  Try "$1".  The other problem,
at least for the second version of that, is that you want to be
passing a text array not a single text value --- so it needs to
look more like "where pt = ANY($1::text[])".

regards, tom lane




What is the best way to get the current number of worker processes?

2020-11-24 Thread Paul Martinez
Hey, all,

As part of configuring the max_worker_processes parameter I would like to know
how many worker processes are currently active, but I'm unsure of the best way
to do this.

pg_stat_activity has a column backend_type which normally takes one of the
following values:

autovacuum launcher
autovacuum worker
logical replication launcher
logical replication worker
parallel worker
background writer
client backend
checkpointer
startup
walreceiver
walsender
walwriter

But extensions can launch additional processes and set this to a custom value
as well. (pglogical, as an example, does this.)

To the best of my understanding, only processes with a backend type set to
"logical replication worker" or "parallel worker" count towards
max_worker_processes, as well as any processes started by an extension that
have backend_type set to any value other than the above values. I believe that,
despite having "worker" in the name, an "autovacuum worker" process does not
count towards max_worker_processes, correct?

So I believe the following query should return the number of processes that
count towards max_worker_processes.

SELECT COUNT(*)
FROM pg_catalog.pg_stat_activity
WHERE backend_type NOT IN (
  'autovacuum launcher',
  'autovacuum worker',
  'logical replication launcher',
  -- 'logical replication worker', -- This is a worker process!
  -- 'parallel worker',         -- This is a worker process!
  'background writer',
  'client backend',
  'checkpointer',
  'startup',
  'walreceiver',
  'walsender',
  'walwriter'
);

Is there a better way to do this? Also, in 9.6 and earlier, pg_stat_activity
doesn't have a backend_type column, so I'm really unsure how to calculate
this in those older versions. I think pglogical is pretty much the only
extension that creates background workers on a regular basis, so I think I
can get a rough approximation by checking application_name for "pglogical"
instead, but I'd prefer a less hacky approach.



On a related note, can anyone verify that all processes that count towards
max_wal_senders will have backend_type set to "walsender", and all processes
that count towards max_connections will have backend_type set to "client
backend"?

I think it might be useful to have additional columns added to pg_stat_activity
that indicate which resource limits each process contributes to.

max_worker_processes   -> isworker
max_wal_senders-> iswalsender
max_connections-> isconn
superuser_reserved_connections -> issuconn

(You could then get the non-superuser connections via
  COUNT(*) FILTER (WHERE isconn) - COUNT(*) FILTER (WHERE issuconn)
.)

These two might also be useful, but I imagine using backend_type is pretty
foolproof for these:

max_logical_replication_workers -> islogrepworker
max_parallel_workers -> isparallel


Thanks,
Paul




Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread Dave Greeko
I am sorry, I used a different query in my last reply, and yes, you are 
correct, Tom. Using $1 worked and the backend indeed prepared the statement 
successfully, but this will force me to do some work on the input array that 
contains the dynamic elements to comply with the string_to_array delimiter 
when calling PQexecPrepared(). I would really like to just pass an array of 
filters of type (const char* const*) to PQexecPrepared's paramValues[] 
parameter instead of making it some sort of csv string.


Here is a sample of working code:

char *query="select codec_id,fs_name,pt from codec_defs where fs_name = ANY(string_to_array($1, ','))";
PGresult *res=PQprepare(conn,"codecs",query,1,NULL);
char *input[1] ={"G729,PCMU"};
PQclear(res);
res=PQexecPrepared(conn,"codecs",1,(const char* const*)input,NULL,NULL,0);

// The second version: ANY($1::text[])

char *query="select codec_id,fs_name,pt from codec_defs where fs_name = ANY($1::text[])";
// this gets prepared successfully
PGresult *res=PQprepare(conn,"codecs",query,1,NULL);

char *input[2] ={"G729","PCMU"};
PQclear(res);
// the call below generates an error:
res=PQexecPrepared(conn,"codecs",2,(const char* const*)input,NULL,NULL,0);

// PQexecPrepared() reports:
ERROR: bind message supplies 2 parameters, but prepared statement "codecs" 
requires 1




Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread David G. Johnston
The convention here is to inline or bottom-post, not top-post.

On Tue, Nov 24, 2020 at 3:47 PM Dave Greeko  wrote:

> I would really like to just pass an array of filters of type (const char*
> const*) to PQexecPrepared's paramValues[] parameter instead of making it
> some sort of csv string.
> //The second version( ANY($1::text[] )
>
> char *query="select codec_id,fs_name,pt from codec_defs where fs_name =
> ANY($1::text[])";
> //this gets prepared successfully
> PGresult *res=PQprepare(conn,"codecs",query,1,NULL);
>
> char *input[2] ={"G729","PCMU"};
> PQclear(res);
> //the below generate an erro:
> res=PQexecPrepared(conn,"codecs",2,(const char* const*)input,NULL ,NULL,0);
>
> //PQexecPrepared() generates Error:
> ERROR: bind message supplies 2 parameters, but prepared statement "codecs"
> requires 1
>

I don't use the C API myself but a quick observation is that you specified
nParams=1 during prepare and nParams=2 during execute, so there is no way
it is going to work.  nParams=1 is correct for both - you must only pass a
single value to the backend, that value must be of type "text array"
(however one does that here).  Since you've chosen not to provide a data
type OID that would mean: "If paramTypes is NULL, or any particular element
in the array is zero, the server assigns a data type to the parameter
symbol in the same way it would do for an untyped literal string.".  As the
server will never assign an untyped literal string to be an array the best
you can do without an OID here is supply a literal that can be cast
directly to "text[]" (
https://www.postgresql.org/docs/current/arrays.html#ARRAYS-INPUT) - and
then write the cast into the sql query, all of which is probably only a
nominal difference from using "csv" and the string_to_array function.  Even
if you do specify whatever oid is "text[]" all you really avoid is the cast
in the query - the API is expecting text in the values argument, not an
"array object" (or the equivalent in C).  That said, if you can import a
client-side header that provides a helper function for this array-to-text
conversion (idk?) then passing the data becomes easier and you just need to
decide whether to add a cast in the SQL or provide the OID.

David J.


Re: libpq Prepared Statement with dynamic IN operator

2020-11-24 Thread Dave Greeko
Many thanks for the clarification, David. I wish there were a way without 
touching my input array, but at least based on your feedback I can avoid the 
string_to_array call in the query. I tried it and it worked, though I still 
have to transform the C array. Here is what I have working so far:


char *query="select codec_id,fs_name,pt from codec_defs where fs_name = 
ANY($1::text[])";
PGresult *res=PQprepare(conn,"codecs",query,1,NULL);

char *input[1] ={ "{G722,PCMU}" }; //this will be dynamic at run-time
res=PQexecPrepared(conn,"codecs",1,(const char* const*)input,NULL ,NULL,0);

Dave.


archive file "00000001000000000000006F" has wrong size: 67118648 instead of 16777216

2020-11-24 Thread 江川潔
Hi,

WAL archive recovery failed because of a wrong archive file size. Could you
please advise me what is wrong in the setting? Any suggestions will be highly
appreciated.

Thanks,
Kiyoshi

postgresql.conf:
wal_level = replica
archive_mode = on
archive_command = 'copy "%p" "D:\\BKUP\\pg_archivedir\\PostgreSQL_DEV\\%f"'

recovery.conf:
restore_command = 'copy "D:\\BKUP\\pg_archivedir\\PostgreSQL_DEV" "%p"'

C:\Users\tkgsys>pg_ctl -D D:\PostgreSQL_RCV start
waiting for server to start...2020-11-25 10:12:22.330 JST [7572] LOG:  listening on
IPv6 address "::", port 5434
2020-11-25 10:12:22.331 JST [7572] LOG:  listening on IPv4 address
"0.0.0.0", port 5434
2020-11-25 10:12:22.393 JST [7792] LOG:  database system was interrupted;
last known up at 2020-11-24 11:01:00 JST
2020-11-25 10:12:22.858 JST [7792] LOG:  starting archive recovery
D:\BKUP\pg_archivedir\PostgreSQL_DEV\000000010000000000000056.00000028.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000005A.00000028.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000005D.00000028.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\000000010000000000000060.00000060.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\000000010000000000000064.00000060.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\000000010000000000000068.00000028.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006B
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006B.00000060.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006C
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006D
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006E
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006F
D:\BKUP\pg_archivedir\PostgreSQL_DEV\00000001000000000000006F.00000060.backup
D:\BKUP\pg_archivedir\PostgreSQL_DEV\000000010000000000000070
        1 file(s) copied.
2020-11-25 10:12:23.569 JST [7792] FATAL:  archive file
"00000001000000000000006F" has wrong size: 67118648 instead of 16777216
2020-11-25 10:12:23.571 JST [7572] LOG:  startup process (PID 7792) exited
with exit code 1
2020-11-25 10:12:23.572 JST [7572] LOG:  aborting startup due to startup
process failure
2020-11-25 10:12:23.576 JST [7572] LOG:  database system is shut down
The wait operation has been stopped.
pg_ctl: could not start server
Examine the log output.

C:\Users\tkgsys>


Re: archive file "00000001000000000000006F" has wrong size: 67118648 instead of 16777216

2020-11-24 Thread Michael Paquier
On Wed, Nov 25, 2020 at 11:01:37AM +0900, 江川潔 wrote:
> Hi,
> 
> WAL log recovery was failed on wrong log record size. Could you please
> advise me what is wrong in the setting ? Any suggestions will be highly
> appreciated.


> 2020-11-25 10:12:23.569 JST [7792] FATAL:  archive file
> "00000001000000000000006F" has wrong size: 67118648 instead of 16777216

Something is fishy with your WAL archive partition, so you had better
check its state, by for example using pg_waldump on each segment
archived.  This error means that the file copied was basically 64MB in
size, while these are expected to be 16MB in this instance.
--
Michael
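
As a quick first pass before reaching for pg_waldump, a directory scan can flag
mis-sized segments. This is a sketch for a POSIX shell (the ARCHIVE_DIR path and
the default 16MB wal_segment_size are assumptions; on Windows the same check
would need PowerShell or similar):

```shell
# Flag archived WAL segments whose size differs from the default 16MB
# (16777216 bytes); point ARCHIVE_DIR at your WAL archive directory.
ARCHIVE_DIR="${ARCHIVE_DIR:-.}"
find "$ARCHIVE_DIR" -maxdepth 1 -type f -name '[0-9A-F]*' \
     ! -name '*.backup' ! -size 16777216c -print
```

Any file this prints is a segment the archive_command wrote at the wrong size
and a candidate for closer inspection with pg_waldump.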

