Re: Reset sequence to current maximum value of rows

2024-06-14 Thread Alban Hertroys
> company_nbr |  company_name 
> -------------+----------------------
>   1 | Markowitz Herbold PC
>   2 | Markowitz Herbold PC
>   3 | Markowitz Herbold PC
>   4 | Markowitz Herbold PC
>   5 | Markowitz Herbold PC
>   6 | Markowitz Herbold PC
>   7 | Markowitz Herbold PC
>   8 | Markowitz Herbold PC
>   9 | Markowitz Herbold PC
>  10 | Markowitz Herbold PC
>  11 | Markowitz Herbold PC
>  12 | Markowitz Herbold PC
>  13 | Markowitz Herbold PC
>  14 | Markowitz Herbold PC
>  15 | Markowitz Herbold PC
>  16 | Markowitz Herbold PC
>  17 | Markowitz Herbold PC
>  18 | Markowitz Herbold PC
>  19 | Markowitz Herbold PC
>  20 | Markowitz Herbold PC
>  22 | Markowitz Herbold PC
>  23 | Markowitz Herbold PC
> --More--

Did those rows contain these values at some earlier point in your 
data-entry process, perhaps? I'm thinking that you overwrote them in a 
later transaction with the correct values for the names, but forgot to commit 
that transaction.

It's either that, or you ran an UPDATE statement against those rows without 
a WHERE clause, as others have already suggested as the likely cause.
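
For illustration, an unqualified UPDATE of this sort -- table and column names 
are assumptions based on the output above -- rewrites every row in the table:

    -- no WHERE clause, so every row receives the same company_name
    UPDATE companies SET company_name = 'Markowitz Herbold PC';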


I think we can rule out index corruption (a very rare occurrence, usually 
caused by factors external to PG) in your case. With a data set this small, 
the planner would almost certainly choose a sequential scan rather than an 
index scan (an EXPLAIN ANALYZE of the above select statement would show this).
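
For instance, assuming the output above came from a plain select on a 
'companies' table:

    EXPLAIN ANALYZE
    SELECT company_nbr, company_name FROM companies ORDER BY company_nbr;

would be expected to show a Seq Scan node rather than an Index Scan.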

> It might be quicker for me to restore the entire database from that backup
> and then insert all new table rows since I have saved all the scripts.

If you end up in the same situation again after doing that, then at least you 
know it's repeatable and can analyse how you got there.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: Configure autovacuum

2024-06-14 Thread Laurenz Albe
On Fri, 2024-06-14 at 06:20 +0000, Shenavai, Manuel wrote:
> I would like to configure the autovacuum in a way that it runs very frequently
> (i.e. after each update-statement). I tried the following settings on my 
> table:
> alter table mytable set (autovacuum_vacuum_scale_factor  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_delay  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_limit  = 1);
> alter table mytable set (autovacuum_vacuum_threshold  = 1);
>  
> I do a lot of updates on a single tuple and I would expect that the autovacuum
> would start basically after each update (due to 
> autovacuum_vacuum_threshold=1).
> But the autovacuum is not running.
>  
> Is it possible to configure postgres to autovacuum very aggressively
> (i.e. after each update-statement)?

The formula in the source code is

    /* Determine if this table needs vacuum or analyze. */
    *dovacuum = force_vacuum || (vactuples > vacthresh) ||
                (vac_ins_base_thresh >= 0 && instuples > vacinsthresh);

With your settings, vacthresh works out to 1 (threshold 1 plus scale factor 0
times the table size), and the comparison is strict, so you need at least two
dead tuples -- that is, two updates -- to exceed the threshold.

If you want a single update to trigger autovacuum, you have to set
"autovacuum_vacuum_threshold" to 0.

I cannot imagine a situation where such a setting would be beneficial.
Particularly if you have lots of updates, this will just burn server resources
and may starve out other tables that need VACUUM.

Yours,
Laurenz Albe




Re: Reserving GUC prefixes from a non-preloaded DB extension is not always enforced

2024-06-14 Thread Laurenz Albe
On Thu, 2024-06-13 at 12:26 -0700, Narek Galstyan wrote:
> I am an extension developer. I use `MarkGUCPrefixReserved` to reserve GUC 
> prefixes,
> which my extension uses to help avoid accidentally misspelled config-file 
> entries.
> 
> However, since the reservation happens in `_PG_init()` and `_PG_init()` is not
> called until the first use of an API exposed by my extension, misspelled 
> config-file
> entries that get executed before the extension is loaded will not throw an 
> error.
> 
> I'd expect GUC variables reserved by an extension to live more permanently in
> Postgres catalogs (e.g., in pg_settings). 
> So, even when the extension binary is not loaded, Postgres would know which 
> prefixes
> are reserved and which GUC settings must be allowed (similar to how Postgres 
> knows
> in pg_extension which extensions are enabled, even when the corresponding 
> extension
> binary has not been loaded).
> 
> > 1. Would you consider the proposed behavior an improvement?

Not really.

If I wanted to avoid that problem, I'd put the extension in 
"shared_preload_libraries",
so that _PG_init() is executed when the server starts.
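
For example, via ALTER SYSTEM (the library name 'my_extension' is a
placeholder; the setting only takes effect after a server restart):

    ALTER SYSTEM SET shared_preload_libraries = 'my_extension';

Setting it directly in postgresql.conf works just as well.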

Yours,
Laurenz Albe




Re: Configure autovacuum

2024-06-14 Thread Ron Johnson
On Fri, Jun 14, 2024 at 2:20 AM Shenavai, Manuel wrote:

> Hi everyone,
>
> I would like to configure the autovacuum in a way that it runs very
> frequently (i.e. after each update-statement). I tried the following
> settings on my table:
>
> alter table mytable set (autovacuum_vacuum_scale_factor  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_delay  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_limit  = 1);
> alter table mytable set (autovacuum_vacuum_threshold  = 1);
>
> I do a lot of updates on a single tuple and I would expect that the
> autovacuum would start basically after each update (due to
> autovacuum_vacuum_threshold=1). But the autovacuum is not running.
>

HOT is probably what you're looking for:
https://www.postgresql.org/docs/14/storage-hot.html

Presuming that the column you're updating is not indexed (a prerequisite for
HOT updates), and that the table can be exclusively locked for as long as it
takes to rewrite it, give the table more free space to work with in each page:
ALTER TABLE foo SET (fillfactor = 30);
VACUUM FULL foo;

Then you don't need to VACUUM so frequently.
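
To verify that the updates are then actually HOT, something like this sketch
(reusing 'foo' from above) should show n_tup_hot_upd tracking n_tup_upd:

    SELECT n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'foo';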


Re: Question about UNIX socket connections and SSL

2024-06-14 Thread Casey & Gina
> On Jun 13, 2024, at 6:47 AM, Daniel Gustafsson wrote:
> 
> While not strictly that, there was a patch not too long ago for teaching
> postgres the PROXY protocol.

As I understand it, PROXY protocol support would be nice if one connects 
through haproxy on standalone hosts, so that postgres could show the 
originating app servers as the client_addr / client_hostname.  We used to have 
standalone host haproxies, but moved to haproxy instances on each app node for 
performance and scalability reasons (many app nodes).  I guess it could also 
help if we were to run pgbouncer on the db nodes?

We're using haproxy to route connections to the appropriate database nodes - RW 
connections go to the current master in the cluster, and RO connections are 
balanced between replicas.  It seems that libpq could allow SSL over UNIX 
sockets, which would avoid having to use TCP for the local connections from 
the application to haproxy.

Is there any way to utilize sslmode=verify-full through something routing 
connections to the appropriate database instances, whether that's with haproxy 
or something else?
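
For concreteness, this is the sort of thing I'd like to work, with the proxy
in the middle (host name and root-cert path hypothetical):

    psql "host=db-rw.example.com port=5000 sslmode=verify-full sslrootcert=/etc/ssl/root.crt"

As I understand it, verify-full checks the server certificate against the host
name the client dials, so the certificate would have to cover the proxy's name.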

-- 
Thanks,
- Casey



Re: Configure autovacuum

2024-06-14 Thread Adrian Klaver

On 6/13/24 23:20, Shenavai, Manuel wrote:
> Hi everyone,
>
> I would like to configure the autovacuum in a way that it runs very
> frequently (i.e. after each update-statement). I tried the following

Why?

What is the problem you are trying to solve?

> settings on my table:
>
> alter table mytable set (autovacuum_vacuum_scale_factor  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_delay  = 0.0);
> alter table mytable set (autovacuum_vacuum_cost_limit  = 1);
> alter table mytable set (autovacuum_vacuum_threshold  = 1);
>
> I do a lot of updates on a single tuple and I would expect that the
> autovacuum would start basically after each update (due to
> autovacuum_vacuum_threshold=1). But the autovacuum is not running.
>
> Is it possible to configure postgres to autovacuum very aggressively
> (i.e. after each update-statement)?
>
> Thanks in advance &
> Best regards,
> Manuel



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: DROP COLLATION vs pg_collation question

2024-06-14 Thread Karsten Hilbert
On Thu, Jun 13, 2024 at 09:49:46AM +0200, Laurenz Albe wrote:

> > > Note that with DROP COLLATION you can only remove collations
> > > that belong to the encoding of your current database.
>
> src/backend/catalog/namespace.c:
>
>   /*
>* get_collation_oid - find a collation by possibly qualified name
>*
>* Note that this will only find collations that work with the current
>* database's encoding.
>*/
>   Oid
>   get_collation_oid(List *collname, bool missing_ok)

Thanks.

Are collations per-database or per-cluster objects?

I am asking because I seem to be unable to

1) use a collation that's intended for an encoding different
   from the database encoding

-> makes sense

2) remove a collation that's intended for an encoding
   different from the database encoding

-> so far so good, ignore them, but

3) update collation version information in pg_collation for
   collations intended for an encoding different from the
   database encoding (ALTER COLLATION ... REFRESH VERSION fails)

which in effect would mean that -- upon a change of collation
versions in the underlying operating system (ICU update, libc
update) -- one would have to live with outdated version
information in pg_collation, short of dump/sed/restore or
some such?
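
For reference, what I am attempting, plus a sketch for comparing recorded and
actual versions (collation name hypothetical; pg_collation_actual_version()
is available in recent releases):

    ALTER COLLATION "de_DE.iso885915" REFRESH VERSION;
    -- fails when the collation's encoding differs from the database encoding

    SELECT collname, collversion, pg_collation_actual_version(oid)
    FROM pg_collation;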

I'm pretty sure I am overlooking something.

Best,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B