Re: FTS and tri-grams

2021-01-06 Thread Artur Zakirov
On Tue, Jan 5, 2021 at 10:26 PM Mark Phillips
 wrote:
> We now wonder if tri-gram alone can achieve a fast full text search result.
> If so, could the tsvector column then be dropped?
>
> 1. Is FTS required for tri-gram to work?
> 2. Are these independent of each other?

FTS and tri-gram search are independent features. You can implement either
one on its own; which you need depends on your use case.

> 3. Is tri-gram alone sufficient for a “full text search” feature?

It isn't sufficient if you want exact and/or phrase search.
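
For illustration, a minimal sketch of the two approaches (assuming a
hypothetical table docs(body text) and the pg_trgm extension):

    -- trigram matching: fast fuzzy/substring search, no language awareness
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);
    SELECT * FROM docs WHERE body ILIKE '%postgres%';

    -- FTS: needed for language-aware, exact, and phrase matching
    CREATE INDEX docs_body_fts ON docs USING gin (to_tsvector('english', body));
    SELECT * FROM docs
    WHERE to_tsvector('english', body) @@ phraseto_tsquery('english', 'full text search');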

-- 
Artur




Re: How to convert escaped text column - force E prefix

2021-01-06 Thread Pavel Stehule
Hi

On Wed, 6 Jan 2021 at 8:55, Durumdara  wrote:

> Dear Members!
>
> A web developer stores JSON-like strings in a text column.
>
> With the E prefix we can get the real text:
>
> Select E'Az ad\u00f3kulcsonk\u00e9nti'
>
> Hungarian: "Az adókulcsonkénti" (ISO-8859-2)
>
> How to get the same result from a table column?
>
> select WhatAFunction( ATable.JSONLikeTextColumn) from ATable
>
> What function do we need to use to "simulate" the E prefix?
>

Currently there is no function that does this; you need to write your
own.

CREATE OR REPLACE FUNCTION public.unistr(text)
 RETURNS text
 LANGUAGE plpgsql
 IMMUTABLE STRICT
AS $function$
declare r text;
begin
  execute 'select e''' || $1 || '''' into r;
  return r;
end;
$function$;
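
For example, calling it on the string from above (a quick sketch):

    select unistr('Az ad\u00f3kulcsonk\u00e9nti');
    -- returns: Az adókulcsonkénti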

Attention: this is ugly and possibly vulnerable to SQL injection!!! But there
is no other way at the moment. A fix is in the queue:

https://commitfest.postgresql.org/31/2613/

Regards

Pavel




> Thank you for the help!
>
> Best regards
> dd


Re: How to convert escaped text column - force E prefix

2021-01-06 Thread Gavan Schneider

On 6 Jan 2021, at 19:43, Pavel Stehule wrote:

Currently there is no function that does this; you need to write your
own.


CREATE OR REPLACE FUNCTION public.unistr(text)
  RETURNS text
  LANGUAGE plpgsql
  IMMUTABLE STRICT
 AS $function$
 declare r text;
 begin
   execute 'select e' || quote_literal($1) into r;
   return r;
 end;
 $function$;


Attention: this is ugly and possibly vulnerable to SQL injection!!! But
there is no other way at the moment. A fix is in the queue.


https://www.postgresql.org/docs/current/functions-string.html
quote_literal ( text ) → text
  Returns the given string suitably quoted to be used as a string
  literal in an SQL statement string.
  Embedded single-quotes and backslashes are properly doubled.
  Note that quote_literal returns null on null input; if the argument
  might be null, quote_nullable is often more suitable.
  See also Example 42.1. quote_literal(E'O\'Reilly') → 'O''Reilly'

It is even uglier, but would it at least help with the SQL injection
risk?


Gavan Schneider
——
Gavan Schneider, Sodwalls, NSW, Australia
Explanations exist; they have existed for all time; there is always a 
well-known solution to every human problem — neat, plausible, and 
wrong.

— H. L. Mencken, 1920


Re: How to convert escaped text column - force E prefix

2021-01-06 Thread Pavel Stehule
On Wed, 6 Jan 2021 at 10:54, Gavan Schneider  wrote:

> On 6 Jan 2021, at 19:43, Pavel Stehule wrote:
>
> Currently there is no function that does this; you need to write your
> own.
>
> CREATE OR REPLACE FUNCTION public.unistr(text)
>   RETURNS text
>   LANGUAGE plpgsql
>   IMMUTABLE STRICT
>  AS $function$
>  declare r text;
>  begin
> execute 'select e' || quote_literal($1) into r;
>return r;
>  end;
>  $function$;
>
> Attention: this is ugly and possibly vulnerable to SQL injection!!! But there
> is no other way at the moment. A fix is in the queue.
>
> https://www.postgresql.org/docs/current/functions-string.html
> quote_literal ( text ) → text
>   Returns the given string suitably quoted to be used as a string literal in 
> an SQL statement string.
>   Embedded single-quotes and backslashes are properly doubled.
>   Note that quote_literal returns null on null input; if the argument might 
> be null, quote_nullable is often more suitable.
>   See also Example 42.1. quote_literal(E'O\'Reilly') → 'O''Reilly'
>
> It is even more ugly but would it at least help with the SQL injection
> risk?
>

It cannot work, because \ will be replaced by \\:

postgres=# CREATE OR REPLACE FUNCTION public.unistr(text)
 RETURNS text
 LANGUAGE plpgsql
 IMMUTABLE STRICT
AS $function$
declare r text;
begin
  execute 'select ' || quote_literal($1) into r;
  return r;
end;
$function$
;
CREATE FUNCTION
postgres=# select unistr('Az ad\u00f3kulcsonk\u00e9nti');
┌──────────────────────────────┐
│            unistr            │
╞══════════════════════════════╡
│ Az ad\u00f3kulcsonk\u00e9nti │
└──────────────────────────────┘
(1 row)



> Gavan Schneider
> ——
> Gavan Schneider, Sodwalls, NSW, Australia
> Explanations exist; they have existed for all time; there is always a
> well-known solution to every human problem — neat, plausible, and wrong.
> — H. L. Mencken, 1920
>


RE: SQL to query running transactions with subtransactions that exceeds 64

2021-01-06 Thread Li EF Zhang
Thanks for your reply. I read it.
When I restart the secondary node, it reported:
"DEBUG:  recovery snapshot waiting for non-overflowed snapshot or until oldest active xid on standby is at least 4739126 (now 1422751)
2020-12-21 00:00:24.415 UTC [146-15455940] CONTEXT:  WAL redo at 4/741941F8 for Standby/RUNNING_XACTS: nextXid 5137417 latestCompletedXid 5137409 oldestRunningXid 1422751; 17 xacts: 2716862 2721890 4665244 2495592 2289138 5137416 2288820 2287653 1422751 4280517 2288510 2287620 3297674 1757103 5137219 3320989 2259670; subxid ovf"
I want to find the oldest running transaction (1422751) using the following SQL, but it showed that there were no transactions running for more than 5 minutes.
How can I find the oldest running transaction?

SELECT
  pid,
  now() - pg_stat_activity.query_start AS duration,
  query,
  state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes';
Thanks,
Lily
----- Original message -----
From: Laurenz Albe
To: Li EF Zhang, pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: SQL to query running transactions with subtransactions that exceeds 64
Date: Wed, Jan 6, 2021 12:49 PM

On Tue, 2021-01-05 at 10:42 +0000, Li EF Zhang wrote:
> I am new to postgresql and sql. I want to check running transactions whose
> subtransactions exceed 64. Is there any SQL statement or other method that
> can do this? Thanks!

You may want to read
https://www.cybertec-postgresql.com/en/subtransactions-and-performance-in-postgresql/

You could export a snapshot with pg_export_snapshot() and see if the resulting
file contains the line "sof:1".

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
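
As an illustrative sketch of that check (the snapshot name and file contents
will vary):

    BEGIN;
    SELECT pg_export_snapshot();  -- returns a name like '00000003-0000001B-1'
    -- the file appears under $PGDATA/pg_snapshots/ while the transaction is
    -- open; "sof:1" in it means the subxid array overflowed (> 64 subxacts)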
 





Using more than one LDAP?

2021-01-06 Thread Paul Förster
Hi,

can I use more than one LDAP server entry in pg_hba.conf? My tests show that 
only the first one is used.

Example:

ldap1.domain contains user1, user2
ldap2.another.domain contains user3, user4

All 4 users have an account in a PostgreSQL cluster.

ldap1 is openLDAP and ldap2 is Windows AD. Both naturally have different 
parameters/options and as such require two different lines in pg_hba.conf.

If I connect as user1 or user2, it works. If I try to connect as user3 or 
user4, it fails because ldap1 reports the user as non-existent, which is 
correct for ldap1. But in this case, ldap2 is never asked.

How can I solve this dilemma?

Cheers,
Paul



Re: Using more than one LDAP?

2021-01-06 Thread Magnus Hagander
On Wed, Jan 6, 2021 at 3:38 PM Paul Förster  wrote:
>
> Hi,
>
> can I use more than one LDAP server entry in pg_hba.conf? My tests show that 
> only the first one is used.
>
> Example:
>
> ldap1.domain contains user1, user2
> ldap2.another.domain contains user3, user4
>
> All 4 users have an account in a PostgreSQL cluster.
>
> ldap1 is openLDAP and ldap2 is Windows AD. Both naturally have different 
> parameters/options and as such require two different lines in pg_hba.conf.
>
> If I connect as user1 or user2, it works. If I try to connect as user3 or 
> user4, it fails because ldap1 reports the user as non-existent, which is 
> correct for ldap1. But in this case, ldap2 is never asked.
>
> How can I solve this dilemma?

Only if you can create rules in your pg_hba.conf file that know where
the users are. You can specify multiple servers on one line, but that
only fails over when a server doesn't respond. If a server replies "no"
to an authentication request, PostgreSQL will not move on to the next
one. So you have to make it pick the correct rule up front.

And what would you do if user5 exists in both LDAP servers?

One hacky way you could do it is to create a group role for each server,
maintained by some cron job, that indicates which LDAP server the user
is on. You can then use group matching to pick the correct rule in
pg_hba. It's kind of an ugly hack though..
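
A hedged sketch of that group-matching approach (role names and LDAP options
are illustrative only):

    # pg_hba.conf -- first matching rule wins
    host all +ldap1_users 0.0.0.0/0 ldap ldapserver=ldap1.domain ldapbasedn="dc=domain"
    host all +ldap2_users 0.0.0.0/0 ldap ldapserver=ldap2.another.domain ldapbasedn="dc=another,dc=domain"

The cron job would then keep the ldap1_users and ldap2_users memberships in
sync with the respective directories.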

You'd probably be better off to have a federated ldap server that has
a view of both servers, and use that.

Or even better, since one of your nodes is AD, it speaks Kerberos.
Setting up a Kerberos trust between the two environments would make it
possible to do things like regexp matching on the realm in
pg_ident.conf, and as a bonus you get Kerberos which is a lot more
secure than ldap for auth..  It might have a slightly higher barrier
of entry, but could probably pay off well in a case like this.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: Using more than one LDAP?

2021-01-06 Thread Paul Förster
Hi Magnus,

> On 06. Jan, 2021, at 15:48, Magnus Hagander  wrote:
> 
> Only if you can create rules in your pg_hba.conf file that know where
> the users are. You can specify multiple servers on one line, but that
> only fails over when a server doesn't respond. If a server replies "no"
> to an authentication request, PostgreSQL will not move on to the next
> one. So you have to make it pick the correct rule up front.

that unfortunately is not an option, partly because LDAP and AD use different
options, and also because, as you already mentioned, if one server says no, it's no.

> And what would you do if user5 exists in both the two ldap servers?

that wouldn't matter as long as user5 exists on the database and can be 
authenticated by either LDAP.

> One hacky way you could do it is to create a group role for each server,
> maintained by some cron job, that indicates which LDAP server the user
> is on. You can then use group matching to pick the correct rule in
> pg_hba. It's kind of an ugly hack though..

that sounds really hacky. ;-)

> You'd probably be better off to have a federated ldap server that has
> a view of both servers, and use that.

can't do that either. I have no control over either LDAP service. PostgreSQL is
just a consumer, and I can't make the two LDAPs sync with each other.

> Or even better, since one of your nodes is AD, it speaks Kerberos.
> Setting up a Kerberos trust between the two environments would make it
> possible to do things like regexp matching on the realm in
> pg_ident.conf, and as a bonus you get Kerberos which is a lot more
> secure than ldap for auth..  It might have a slightly higher barrier
> of entry, but could probably pay off well in a case like this.

that'd require me to recompile and redistribute the PostgreSQL software. I only
have openLDAP compiled into it but no GSSAPI. While this could be possible, it
would also mean a service interruption, which is next to impossible in a 24x7
environment. Also, and I'm no expert on this, it would require me to get
certificates and configure them, and so on, right?

I thought of a pg_ident.conf configuration. In fact, it's more of a prefix 
change. The complete situation is like this:

ldap1 knows aaa-u1, aaa-u2, and so on
ldap2 knows bbb-u1, bbb-u2, and so on

So, I thought, I could create a pg_ident.conf like this:

mymap   /^aaa-(.*)$   bbb-\1

Then pg_ctl reload of course. But that doesn't seem to work. Maybe I'm doing
something wrong here.

Cheers,
Paul



Re: Using more than one LDAP?

2021-01-06 Thread Magnus Hagander
On Wed, Jan 6, 2021 at 4:39 PM Paul Förster  wrote:
>
> Hi Magnus,
>
> > On 06. Jan, 2021, at 15:48, Magnus Hagander  wrote:
> >
> > Only if you can create rules in your pg_hba.conf file that know where
> > the users are. You can specify multiple servers on one line, but that
> > only fails over when a server doesn't respond. If a server replies "no"
> > to an authentication request, PostgreSQL will not move on to the next
> > one. So you have to make it pick the correct rule up front.
>
> that unfortunately is not an option, partly because LDAP and AD use different
> options, and also because, as you already mentioned, if one server says no, it's no.
>
> > And what would you do if user5 exists in both LDAP servers?
>
> that wouldn't matter as long as user5 exists on the database and can be 
> authenticated by either LDAP.
>
> > One hacky way you could do it is to create a group role for each server,
> > maintained by some cron job, that indicates which LDAP server the user
> > is on. You can then use group matching to pick the correct rule in
> > pg_hba. It's kind of an ugly hack though..
>
> that sounds really hacky. ;-)

Yes. But you have a really hacky environment :P


> > You'd probably be better off to have a federated ldap server that has
> > a view of both servers, and use that.
>
> can't do that either. I have no control over either LDAP service. PostgreSQL
> is just a consumer, and I can't make the two LDAPs sync with each other.

You could have a third LDAP instance that federates the other two.

Another option could be to proxy it through something like FreeRADIUS.
I'm fairly certain it can also move on to a secondary server if the
first one reports login failure.


> > Or even better, since one of your nodes is AD, it speaks Kerberos.
> > Setting up a Kerberos trust between the two environments would make it
> > possible to do things like regexp matching on the realm in
> > pg_ident.conf, and as a bonus you get Kerberos which is a lot more
> > secure than ldap for auth..  It might have a slightly higher barrier
> > of entry, but could probably pay off well in a case like this.
>
> that'd require me to recompile and redistribute the PostgreSQL software. I
> only have openLDAP compiled into it but no GSSAPI. While this could be
> possible, it would also mean a service interruption, which is next to
> impossible in a 24x7 environment. Also, and I'm no expert on this, it would
> require me to get certificates and configure them, and so on, right?

I assume you're not using any of the standard packages then, as I
believe they all come with support for GSSAPI. Yet another reason why
it's a good idea to use those :)

And no, gssapi does not use certificates.


> I thought of a pg_ident.conf configuration. In fact, it's more of a prefix 
> change. The complete situation is like this:
>
> ldap1 knows aaa-u1, aaa-u2, and so on
> ldap2 knows bbb-u1, bbb-u2, and so on
>
> So, I thought, I could create a pg_ident.conf like this:
>
> mymap   /^aaa-(.*)$   bbb-\1
>
> Then pg_ctl reload of course. But that doesn't seem to work. Maybe I'm doing
> something wrong here.

pg_ident only works for authentication methods where the username
comes from the other system, such as with Kerberos. It does not work
for LDAP, where the username is specified in PostgreSQL.
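
For illustration, a hedged sketch of a setup where the map does apply (realm
and map name invented):

    # pg_hba.conf -- GSSAPI supplies an external identity, so a map applies
    host all all 0.0.0.0/0 gss include_realm=1 map=adrealm

    # pg_ident.conf -- strip the realm to get the PostgreSQL role name
    adrealm /^(.*)@EXAMPLE\.COM$ \1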


-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: LDAP(s) doc misleading

2021-01-06 Thread Stephen Frost
Greetings,

* Paul Förster (paul.foers...@gmail.com) wrote:
> I found this because I'm in the process of making our Linux LDAP servers 
> obsolete by reconfiguring PostgreSQL to use our company Windows Active 
> Directory LDAPS service.

When in an Active Directory environment, it's far more secure to use
Kerberos/GSSAPI and not LDAP (or LDAPS).  Using the ldap authentication
method with PostgreSQL will result in the credentials of users being
sent to the database server, such that if the database server is
compromised so will all of those user accounts.

Thanks,

Stephen




Error messages on duplicate schema names

2021-01-06 Thread Andrus

Hi!

ALTER DEFAULT PRIVILEGES IN SCHEMA public,public GRANT all ON TABLES TO testoig;


This throws a strange error:

Tuple already updated by self

In another case which I posted, a duplicate schema causes another strange error:

duplicate key value violates unique constraint "pg_default_acl_role_nsp_obj_index"
DETAIL: Key (defaclrole, defaclnamespace, defaclobjtype)=(30152, 186783649, r) already exists.


Should duplicate schema names be accepted, or should their usage throw
better error messages?


Andrus.



Keep needing to run manual analyze

2021-01-06 Thread Rob Northcott
We have an application that uses a Postgres database (currently 9.6).  All the 
autovacuum/analyze settings are just left at the defaults.
We've had a few instances recently where users have complained of slow 
performance and running a manual Analyze (not vacuum) on their database has 
resolved it.
Should it be considered normal that this needs to be done manually periodically 
even when autovacuum is on?
It's not a very heavy application by a lot of people's standards (it's just a 
point-of-sale/stock-control system) but there are a couple of hundred databases 
on the server, all in fairly constant use through the day.
Is it possible that the autovacuum/analyze isn't running for some reason?  If 
so, is there anything I should look at to make it work better or should I just 
be looking at scheduling a manual vacuum/analyze periodically on all databases?

I've had a search around but there seem to be lots of differing opinions on 
what needs to be done manually and what should be handled by the autovacuum.

Any hints most welcome...

Med vänlig hälsning / Best Regards

Rob Northcott
Software Developer (UK Office, formerly TEAM Systems)

Phone   +44 1752 712052

Compilator AB
Södergatan 22
SE-211 34 Malmö
Sweden
www.compilator.com





Re: Keep needing to run manual analyze

2021-01-06 Thread Michael Lewis
On Wed, Jan 6, 2021 at 10:29 AM Rob Northcott 
wrote:

> We have an application that uses a Postgres database (currently 9.6).  All
> the autovacuum/analyze settings are just left at the defaults.
>
> We’ve had a few instances recently where users have complained of slow
> performance and running a manual Analyze (not vacuum) on their database has
> resolved it.
>
> Should it be considered normal that this needs to be done manually
> periodically even when autovacuum is on?
>
> It’s not a very heavy application by a lot of people’s standards (it’s
> just a point-of-sale/stock-control system) but there are a couple of
> hundred databases on the server, all in fairly constant use through the day.
>

You may need more workers, and a higher cost limit before work is paused
for cost_delay. Depending on how many tables per database are in the cluster,
more workers would likely be ideal, or *maybe* a smaller naptime if there are
tons of tables overall and all of them are relatively small and see few
changes.

It really depends on your workload and *why* the tables aren't getting
analyzed as frequently as you need. If your cost limit/delay means that
autovacuum/analyze is heavily throttled (and the default settings amount to
that, given today's I/O throughput on any decent production machine), and you
have some large tables with many large indexes that are constantly in need of
vacuuming, and you don't have sufficient maintenance work memory configured
to avoid re-scanning the indexes repeatedly to get the work done... then you
may never get around to the other tables. If you have a table that is
(nearly) all inserts, then a periodic manual vacuum/analyze is prudent
before PG13.
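
As a hedged starting point (values purely illustrative -- tune against your
own workload; the defaults shown are for 9.6):

    # postgresql.conf
    autovacuum_max_workers = 6           # default 3
    autovacuum_vacuum_cost_limit = 2000  # default -1, i.e. vacuum_cost_limit = 200
    autovacuum_naptime = 30s             # default 1min
    maintenance_work_mem = 512MB         # default 64MB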

Are you logging all autovacuums/analyzes and able to run pgBadger or a
similar analysis on the logs? It would be helpful to see some stats on what
is going on currently.
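
A quick way to see what autovacuum has actually been doing (a sketch; setting
log_autovacuum_min_duration = 0 logs every run):

    -- when did autovacuum/autoanalyze last touch each table?
    SELECT relname, last_autovacuum, last_autoanalyze, n_mod_since_analyze
    FROM pg_stat_user_tables
    ORDER BY last_autoanalyze NULLS FIRST;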


Re: CROSSTAB( .. only one column has values... )

2021-01-06 Thread Daniel Verite
(resent to the list, previous post was rejected)

   Pavel Stehule wrote:

> > *That* is a function of how Postgres set returning functions work, and not
> > specific to crosstab(). It is not easily fixed. Patches to fix that would
> > be
> > welcomed!
> >
> 
> https://www.postgresql.org/message-id/flat/CAFj8pRC%2BhNzpH%2B0bPRCnqNncUCGjEvpwX%2B0nbhb1F7gwjYZZNg%40mail.gmail.com#9b3fbdd968c271668049a103bfc93880

Oracle has Polymorphic Table Functions, but they still don't work
seamlessly for dynamic pivots.
This blog post illustrates why:
https://blog.sqlora.com/en/dynamic-pivot-with-polymorphic-table-function/
The "related posts" at the end are also interesting about this feature.

Even before considering more dynamic SRFs in postgres, having a
variable number of columns for a DML statement is problematic
in general.

When an SQL statement is parsed/prepared, it's not supposed to
change its output structure, unless a DDL statement intervenes.
The dynamic pivot must break this promise, since a simple INSERT or
UPDATE or DELETE in the pivoting rows may cause the number of
output columns to change across invocations of the same statement.
That would mean that PREPARE would be basically unusable or
unreliable for such statements.

I think the query parser is also not supposed to read data outside
of the catalogs to determine the output structure of a query.
This promise would also need to be broken by dynamic pivots
implemented as a single-pass DML query.

On the other hand, dynamic pivots can be done by generating the SQL
dynamically and getting at the results in a second step, or by returning a
resultset embedded in a scalar (json).
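
For example, a minimal sketch of the json variant (table and column names
invented):

    -- one stable output column; the "pivoted" keys live inside the json value
    SELECT row_key,
           json_object_agg(category, amount) AS pivoted
    FROM sales
    GROUP BY row_key;
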
Speaking of features that don't exist but might someday, ISTM that
CALL crosstab_dynamic(...) is more plausible than
SELECT * FROM crosstab_dynamic(...), since CALL
doesn't have the same contingencies as SELECT.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite




Re: CROSSTAB( .. only one column has values... )

2021-01-06 Thread Pavel Stehule
Hi

On Wed, 6 Jan 2021 at 21:47, Daniel Verite  wrote:

> Pavel Stehule wrote:
>
> > > *That* is a function of how Postgres set returning functions work, and
> not
> > > specific to crosstab(). It is not easily fixed. Patches to fix that
> would
> > > be
> > > welcomed!
> > >
> >
> >
> https://www.postgresql.org/message-id/flat/CAFj8pRC%2BhNzpH%2B0bPRCnqNncUCGjEvpwX%2B0nbhb1F7gwjYZZNg%40mail.gmail.com#9b3fbdd968c271668049a103bfc93880
>
> Oracle has Polymorphic Table Functions, but they still don't work
> seamlessly for dynamic pivots.
> This blog post illustrates why:
> https://blog.sqlora.com/en/dynamic-pivot-with-polymorphic-table-function/
> The "related posts" at the end are also interesting about this feature.
>
> Even before considering more dynamic SRFs in postgres, having a
> variable number of columns for a DML statement is problematic
> in general.
>
> When an SQL statement is parsed/prepared, it's not supposed to
> change its output structure, unless a DDL statement intervenes.
> The dynamic pivot must break this promise, since a simple INSERT or
> UPDATE or DELETE in the pivoting rows may cause the number of
> output columns to change across invocations of the same statement.
> That would mean that PREPARE would be basically unusable or
> unreliable for such statements.
>

yes

Polymorphic functions need two steps: the first step returns the structure,
the second the data.

Prepared statements can be supported, but there should be a recheck that the
result has the expected structure. And maybe in the future prepared
statements can be more dynamic, and able to do replanning when necessary.


> I think the query parser is also not supposed to read data outside
> of the catalogs to determine the output structure of a query.
> This promise would also need to be broken by dynamic pivots
> implemented as a single-pass DML query.
>
> On the other hand, dynamic pivots can be done by generating the SQL
> dynamically and getting at the results in a second step, or by returning a
> resultset embedded in a scalar (json).
> Speaking of features that don't exist but might someday, ISTM that
> CALL crosstab_dynamic(...) is more plausible than
> SELECT * FROM crosstab_dynamic(...), since CALL
> doesn't have the same contingencies as SELECT.
>
>
Yes, it is the Sybase way and it can be useful. But you cannot work with
the returned result any further.

For users it isn't too important whether they have to use polymorphic
functions, dynamic recordsets, or a PIVOT clause. What is important is
performance and ergonomics. Polymorphic functions can be used for more tasks
than pivoting (reading some external sources, ...). Dynamic recordsets or
multi-recordsets can be nice features too. I like multi-recordsets for
reporting. And Oracle's PIVOT clause is just handy and doesn't require
programming.

Regards

Pavel



>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: https://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: Keep needing to run manual analyze

2021-01-06 Thread Laurenz Albe
On Wed, 2021-01-06 at 17:28 +0000, Rob Northcott wrote:
> We have an application that uses a Postgres database (currently 9.6).  All 
> the autovacuum/analyze settings are just left at the defaults.
> We’ve had a few instances recently where users have complained of slow 
> performance and running a manual Analyze
>  (not vacuum) on their database has resolved it.
> Should it be considered normal that this needs to be done manually 
> periodically even when autovacuum is on?
> It’s not a very heavy application by a lot of people’s standards (it’s just a 
> point-of-sale/stock-control system)
>  but there are a couple of hundred databases on the server, all in fairly 
> constant use through the day.
> Is it possible that the autovacuum/analyze isn’t running for some reason?  If 
> so, is there anything I
>  should look at to make it work better or should I just be looking at 
> scheduling a manual vacuum/analyze
>  periodically on all databases?

That is not necessary.

You should figure out which tables need more frequent ANALYZE runs to get
good execution plans, and tune autoanalyze for these tables, for example by
reducing the scale factor for them.
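
For example (table name and threshold invented):

    -- analyze after ~1% of rows change instead of the default 10%
    ALTER TABLE hot_table SET (autovacuum_analyze_scale_factor = 0.01);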

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: SQL to query running transactions with subtransactions that exceeds 64

2021-01-06 Thread Laurenz Albe
On Wed, 2021-01-06 at 11:14 +0000, Li EF Zhang wrote:
> When I restart the secondary node, it reported:
>  "DEBUG:  recovery snapshot waiting for non-overflowed snapshot or until 
> oldest active xid on standby is at least 4739126 (now 1422751)
> CONTEXT:  WAL redo at 4/741941F8 for Standby/RUNNING_XACTS: nextXid 5137417 
> latestCompletedXid 5137409 oldestRunningXid 1422751; 17 xacts: 2716862 
> 2721890 4665244 2495592 2289138 5137416 2288820
> 2287653 1422751 4280517 2288510 2287620 3297674 1757103 5137219 3320989 
> 2259670; subxid ovf"
> I want to find the oldest running transaction (1422751) using the following
> SQL, but it showed that there were no transactions running for more than
> 5 minutes.
> How can I find the oldest running transaction?
> 
> SELECT
>   pid,
>   now() - pg_stat_activity.query_start AS duration,
>   query,
>   state
> FROM pg_stat_activity
> WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes';

Use xact_start rather than query_start.
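
Applied to the query above, that would be:

    SELECT
      pid,
      now() - pg_stat_activity.xact_start AS duration,
      query,
      state
    FROM pg_stat_activity
    WHERE (now() - pg_stat_activity.xact_start) > interval '5 minutes';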

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com