CREATE TABLE AS SELECT hangs

2018-10-01 Thread derek

Hello,

I am trying to populate/create a database with CREATE TABLE AS SELECT 
like so:


   CREATE TABLE test_table AS
   SELECT row_number() over() as gid, cells.geom
   FROM test_geom_function(1,2,3) AS cells;

This works on one database instance but not on another. On the instance
where it fails, the statement seems to hang during execution, and when I
cancel it I get


   ERROR:  canceling statement due to user request
   CONTEXT:  while inserting index tuple (6,13) in relation
   "pg_type_typname_nsp_index"
   SQL state: 57014

Any help/suggestions would be greatly appreciated.  I am running 
PostgreSQL 9.6 with PostGIS for some spatial functionality.


Thanks,

Derek



Re: CREATE TABLE AS SELECT hangs

2018-10-02 Thread derek

Tom.

Thanks so much for your response. Your theory appears to have been 
correct, and it is working like a champ now.


Best,

Derek

On 10/1/2018 4:29 PM, Tom Lane wrote:

derek  writes:

> I am trying to populate/create a database with CREATE TABLE AS SELECT
> like so:
>      CREATE TABLE test_table AS
>      SELECT row_number() over() as gid, cells.geom
>      FROM test_geom_function(1,2,3) AS cells;
> This works on one database instance, but not on another. On the database
> it doesn't work on it seems to hang on executing and when I cancel it I get
>      ERROR:  canceling statement due to user request
>      CONTEXT:  while inserting index tuple (6,13) in relation
>      "pg_type_typname_nsp_index"
>      SQL state: 57014

Hmm (pokes around) ... That error context message seems to only be
possible if we were blocked waiting for some other transaction.
I theorize that you have an uncommitted transaction someplace that
has created the same table name.  Cancelling it would fix things.

regards, tom lane
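
Tom's theory can be checked directly. A hedged sketch (using
pg_blocking_pids(), available from 9.6 onward; session and column choices are
this editor's, not from the thread):

```sql
-- Run in a separate session while the CREATE TABLE AS appears hung.
-- Lists waiting sessions together with the sessions blocking them.
SELECT waiting.pid   AS waiting_pid,
       waiting.query AS waiting_query,
       blocker.pid   AS blocking_pid,
       blocker.state AS blocking_state,
       blocker.query AS blocking_query
FROM pg_stat_activity AS waiting
CROSS JOIN LATERAL unnest(pg_blocking_pids(waiting.pid)) AS b(pid)
JOIN pg_stat_activity AS blocker ON blocker.pid = b.pid;

-- If the blocker turns out to be an "idle in transaction" session that
-- created a table of the same name, ending it releases the lock:
-- SELECT pg_terminate_backend(blocking_pid);
```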





GIST/GIN index not used with Row Level Security

2019-08-13 Thread Derek Hans
When using row level security, GIN and GIST indexes appear to get ignored.
Is this expected behavior? Can I change the query to get PostgreSQL using
the index? For example, with RLS enabled, this query:

select * from search where search like '%yo'

Creates this query plan:
"Seq Scan on search  (cost=0.00..245.46 rows=1 width=163)"
"  Filter: (((tenant_name)::name = CURRENT_USER) AND (search ~~
'%yo'::text))"

Running this same query with the owner of the table, thereby disabling RLS,
the index gets used as expected:
"Bitmap Heap Scan on search  (cost=4.49..96.33 rows=44 width=163)"
"  Recheck Cond: (search ~~ '%yo'::text)"
"  ->  Bitmap Index Scan on search__gist  (cost=0.00..4.48 rows=44 width=0)"
"Index Cond: (search ~~ '%yo'::text)"

I see the same behavior with more complex queries, switching to GIN index,
more complex RLS rules, using word_similarity instead of like, using full
text search and larger data sets (e.g. 100k rows). This is on PostgreSQL
v11.1 on Windows 10.
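
A minimal sketch of the setup being compared (table, policy predicate, and
index definition are taken from later in this thread; everything else is
assumed for illustration):

```sql
-- Trigram support for the GiST index on text.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE search (
    tenant_name text NOT NULL,
    search      text
);

CREATE INDEX search__gist ON search USING gist (search gist_trgm_ops);

ALTER TABLE search ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON search
    USING (tenant_name::name = CURRENT_USER);

-- As a non-owner subject to the policy: plan shows a seq scan.
EXPLAIN SELECT * FROM search WHERE search LIKE '%yo';
-- As the table owner (RLS not applied): plan uses the trigram index.
```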

-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: GIST/GIN index not used with Row Level Security

2019-08-13 Thread Derek Hans
> What are the RLS policies on the table?

From select * from pg_policies:
"((tenant_name)::name = CURRENT_USER)"


> What is the definition of the GIN index?

CREATE INDEX search__gist
ON public.search USING gist
(search COLLATE pg_catalog."default" gist_trgm_ops)
TABLESPACE pg_default;


> Best guess is the RLS is preventing access to the field needed by the
> index.

I didn't realize RLS can limit access to a specific field/index - my
understanding was that it only affects what rows get returned/can be
updated/inserted.


>
> >
> > select * from search where search like '%yo'
> >
> > Creates this query plan:
> > "Seq Scan on search  (cost=0.00..245.46 rows=1 width=163)"
> > "  Filter: (((tenant_name)::name = CURRENT_USER) AND (search ~~
> > '%yo'::text))"
> >
> > Running this same query with the owner of the table, thereby disabling
> > RLS, the index gets used as expected:
> > "Bitmap Heap Scan on search  (cost=4.49..96.33 rows=44 width=163)"
> > "  Recheck Cond: (search ~~ '%yo'::text)"
> > "  ->  Bitmap Index Scan on search__gist  (cost=0.00..4.48 rows=44
> width=0)"
> > "Index Cond: (search ~~ '%yo'::text)"
> >
> > I see the same behavior with more complex queries, switching to GIN
> > index, more complex RLS rules, using word_similarity instead of like,
> > using full text search and larger data sets (e.g. 100k rows). This is on
> > PostgreSQL v11.1 on Windows 10.
> >
> > --
> > *Derek*
> > +1 (415) 754-0519 |derek.h...@gmail.com <mailto:derek.h...@gmail.com> |
> > Skype: derek.hans
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: GIST/GIN index not used with Row Level Security

2019-08-13 Thread Derek Hans
>
>
> Your example is obscuring the issue by incorporating a tenant_name
> condition (where did that come from, anyway?) in one case and not
> the other.  Without knowing how selective that is, it's hard to
> compare the EXPLAIN results.
>
>
That's RLS kicking in - RLS condition is defined as
((tenant_name)::name = CURRENT_USER)


> However, wild-guess time: it might be that without access to the
> table statistics, the "search like '%yo'" condition is estimated
> to be too unselective to make an indexscan profitable.  And putting
> RLS in the way would disable that access if the ~~ operator is not
> marked leakproof, which it isn't.
>

I didn't realize you could set access to table statistics. How do I enable
this access for this user? If that's not possible, it sounds like it
effectively blocks the use of GIN/GIST indexes when RLS is in use.
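
Whether a given operator's underlying function is marked leakproof can be
read from the catalogs. A hedged sketch for the operators in this thread:

```sql
-- Join each operator to its implementing function and check proleakproof.
SELECT o.oprname, p.proname, p.proleakproof
FROM pg_operator o
JOIN pg_proc p ON p.oid = o.oprcode
WHERE o.oprname IN ('~~', '%', '<%');
```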


> I'm not sure that you should get too excited about this, however.
> You're evidently testing on a toy-size table, else the seqscan
> cost estimate would be a lot higher.  With a table large enough
> to make it really important to guess right, even the default
> selectivity estimate might be enough to get an indexscan.
>
>
I've tried this with larger data sets, with the same results. I discovered
this problem because the select was taking 10-30 seconds instead of the
expected sub-second, when using larger data sets and more fields getting
searched. The example is the simplest repro case I could create.



> regards, tom lane
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: GIST/GIN index not used with Row Level Security

2019-08-13 Thread Derek Hans
Thanks for the pointer for marking functions as leakproof, I was unaware of
that whole concept.

Unfortunately only "alter function" supports "leakproof" - "alter operator"
does not. Is there a function-equivalent for marking operators as
leakproof? Is there any documentation for which operators/functions are
leakproof?
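
The catalogs themselves answer the documentation question; a sketch:

```sql
-- All functions currently marked leakproof in this database:
SELECT proname FROM pg_proc WHERE proleakproof ORDER BY proname;

-- An operator is effectively leakproof when its implementing function is:
SELECT o.oprname, p.proname
FROM pg_operator o
JOIN pg_proc p ON p.oid = o.oprcode
WHERE p.proleakproof
ORDER BY o.oprname;
```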

In my particular case, RLS is still useful even if operators are leaky as I
control the application code and therefore can ensure leaky errors are
handled. If it's possible to disable all checking for "leakproof", that
would work for me.

> If that's not possible, it sounds like it
> > effectively blocks the use of GIN/GIST indexes when RLS is in use.
>
> There's a whole lot of daylight between "it doesn't pick an indexscan in
> this one example" and "it effectively blocks the use of GIN/GIST".
>

True indeed :). Would you have a working example of using a GIN/GIST index
with RLS? All the attempts I've made have ended in seq scans. In practice,
I'm looking to implement fuzzy search using trigrams, so % and %> operators
are what matter to me. ~~ also happens to fail. Should I expect to be able
to use any of these with RLS, large amounts of data and reasonable
performance?

Your description of leakproof (and the documentation I've found) makes it
sound like I'm not just hitting an isolated problem, but a general problem
with RLS that represents a substantial limitation and is likely worth
documenting.

-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: GIST/GIN index not used with Row Level Security

2019-08-13 Thread Derek Hans
Thanks for the detailed response, super helpful in understanding what's
happening, in particular understanding the risk of not marking functions as
leakproof. I'll take a look at the underlying code to understand what's
involved in getting a function to be leakproof.

That said, it does seem like it should be possible and reasonable to
specify that a user should have access to the table stats so that the query
planner works as expected. Maybe it comes down to the fact that RLS is
still a work in progress, and I shouldn't be relying on it unless I'm
really certain it supports the functionality I need.

I've updated word_similarity_op(text,text) to be leakproof, and
pg_proc agrees it is. I'm assuming word_similarity_op() is equivalent to
<%, though I haven't found explicit confirmation. However, using
word_similarity() instead of <% on a 100k row table, without any RLS
involved, doesn't make use of the index, while using <% does. Obviously,
adding the RLS doesn't make that any better. Any idea what might be the
cause?


On Tue, Aug 13, 2019 at 5:39 PM Stephen Frost  wrote:

> Greetings,
>
> * Derek Hans (derek.h...@gmail.com) wrote:
> > Unfortunately only "alter function" supports "leakproof" - "alter
> operator"
> > does not. Is there a function-equivalent for marking operators as
> > leakproof? Is there any documentation for which operators/functions are
> > leakproof?
>
> Tom's query downthread provides the complete list.
>
> Note that the list is not completely static- it's entirely possible that
> additional functions can be made leak-proof, what's needed is a careful
> review of the function code to ensure that it can't leak information
> about the data (or, if it does today, a patch which removes that).  If
> you have an interest in that then I'd encourage you to dig into the code
> and look for possible leaks (Tom's already hinted in the direction you'd
> want to go in) and then propose a patch to address those cases and to
> mark the function(s) as leakproof.
>
> > In my particular case, RLS is still useful even if operators are leaky
> as I
> > control the application code and therefore can ensure leaky errors are
> > handled. If it's possible to disable all checking for "leakproof", that
> > would work for me.
>
> There isn't a way to disable the leakproof-checking system.  Certainly
> in the general case that wouldn't be acceptable and I'm not entirely
> convinced by your argument that such an option should exist, though you
> could go through and set all of the functions to be leakproof if you
> really wish to.
>
> > > If that's not possible, it sounds like it
> > > > effectively blocks the use of GIN/GIST indexes when RLS is in use.
> > >
> > > There's a whole lot of daylight between "it doesn't pick an indexscan
> in
> > > this one example" and "it effectively blocks the use of GIN/GIST".
> >
> > True indeed :). Would you have a working example of using a GIN/GIST
> index
> > with RLS? All the attempts I've made have ended in seq scans. In
> practice,
> > I'm looking to implement fuzzy search using trigrams, so % and %>
> operators
> > are what matter to me. ~~ also happens to fail. Should I expect to be
> able
> > to use any of these with RLS, large amounts of data and reasonable
> > performance?
>
> Functions that aren't marked leakproof aren't going to be able to be
> pushed down.
>
> > Your description of leakproof (and the documentation I've found) makes it
> > sound like I'm not just hitting an isolated problem, but a general
> problem
> > with RLS that represents a substantial limitation and is likely worth
> > documenting.
>
> There's some documentation regarding leakproof functions here:
>
> https://www.postgresql.org/docs/current/ddl-rowsecurity.html
>
> and here:
>
> https://www.postgresql.org/docs/11/sql-createfunction.html
>
> Of course, patches are welcome to improve on our documentation.
>
> One thing that it sounds like you're not quite appreciating is that in
> the general case, verifying that a function is leakproof isn't optional.
> Without such a check, any user could create a function and then get PG
> to push that function down below the RLS checks and therefore gain
> access to the data that they aren't supposed to be able to see.
>
> All that said, there's quite a few functions that *are* marked as
> leakproof already and they're quite handy and work well with RLS
> already, as I expect you'll see when you go querying pg_proc.
>
> Thanks,
>
> Stephen
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: GIST/GIN index not used with Row Level Security

2019-08-14 Thread Derek Hans
>
>
> > I've updated word_similarity_op(text,text) to be leakproof, and
> > pg_proc agrees it is. I'm assuming word_similarity_op() is equivalent to
> > <%, though I haven't found explicit confirmation. However, using
> > word_similarity() instead of <% on a 100k row table, without any RLS
> > involved, doesn't make use of the index, while using <% does. Obviously,
> > adding the RLS doesn't make that any better. Any idea what might be the
> > cause?
>
> Just to be clear, you should be looking at pg_operator (oprcode) to
> determine the function that is under the operator that you wish to
> change to being leakproof.
>
>
Thanks for that pointer.


> Note that the selectivity functions are associated with the operator,
> not the function itself.
>

That was the missing piece, thanks. How come operators get optimized but
functions don't?

Quick summary:
The text similarity/full text search/like operators are not marked as
leakproof, which stops them from having access to table statistics. When
combined with row level security, operators that aren't leakproof can't get
pushed down and therefore happen after the RLS check, preventing use of
GIN/GIST indexes. A workaround is marking the underlying function as
leakproof but that is only reasonable because our particular setup makes it
acceptable if information leaks via database error messages.

To resolve:
- Lookup function associated with operator being used via the pg_operator
table
- Check if that function is leakproof based on info in pg_proc table
- ALTER FUNCTION func LEAKPROOF
- Use the original operator in code - calling the underlying function
directly doesn't get optimized and bypasses the index
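
The steps above, sketched as catalog queries (using the
word_similarity_op(text,text) function mentioned earlier in the thread;
adapt the names to the operator you actually use):

```sql
-- 1. Find the function implementing the operator (here: <%):
SELECT o.oprname, o.oprcode::regproc AS implementing_function
FROM pg_operator o
WHERE o.oprname = '<%';

-- 2. Check whether that function is already leakproof:
SELECT proname, proleakproof
FROM pg_proc
WHERE proname = 'word_similarity_op';

-- 3. Mark it leakproof (superuser only; acceptable only if leaking values
--    through error messages is tolerable in your deployment):
ALTER FUNCTION word_similarity_op(text, text) LEAKPROOF;

-- 4. Keep writing queries with the operator itself, since a direct
--    word_similarity_op(...) call is not tied to the index machinery.
```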

While those steps work on my local machine, unfortunately we're deployed on
AWS Aurora which doesn't allow marking functions as leakproof. Functions
are owned by the rdsadmin user and controlled by AWS. In practice, that
appears to mean that fuzzy search/full text search with reasonable
performance isn't compatible with RLS on Amazon Aurora. We may end up
setting up Elasticsearch to support text search. In any case, we need to
separate search from checking who is allowed to see the results.

Thanks for the help from everyone!


Update does not move row across foreign partitions in v11

2019-02-22 Thread Derek Hans
I've set up 2 instances of PostgreSQL 11. On instance A, I created a table
with 2 local partitions and 2 partitions on instance B using foreign data
wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.
Inserting rows into this table works as expected, with rows ending up in
the appropriate partition. However, updating those rows only moves them
across partitions in some of the situations:

   - From local partition to local partition
   - From local partition to foreign partition

Rows are not moved

   - From foreign partition to local partition
   - From foreign partition to foreign partition

Is this the expected behavior? Am I missing something or configured
something incorrectly?
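
For reference, a hedged sketch of the kind of setup being described (server,
table, and column names are assumed, following the postgres_fdw sharding
pattern from the linked article):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER instance_b FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'instance-b.example', dbname 'shard_db');
CREATE USER MAPPING FOR CURRENT_USER SERVER instance_b
    OPTIONS (user 'shard_user', password 'secret');

-- Hash-partitioned parent with two local and two foreign partitions.
CREATE TABLE events (id bigint, payload text)
    PARTITION BY HASH (id);

CREATE TABLE events_p0 PARTITION OF events
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE FOREIGN TABLE events_p2 PARTITION OF events
    FOR VALUES WITH (MODULUS 4, REMAINDER 2)
    SERVER instance_b OPTIONS (table_name 'events_p2');
CREATE FOREIGN TABLE events_p3 PARTITION OF events
    FOR VALUES WITH (MODULUS 4, REMAINDER 3)
    SERVER instance_b OPTIONS (table_name 'events_p3');

-- An UPDATE that changes the partition key must move the row; per this
-- thread, in v11 that works when the source partition is local but not
-- when it is foreign.
UPDATE events SET id = id + 1 WHERE id = 42;
```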

Thanks,
Derek


Re: Update does not move row across foreign partitions in v11

2019-02-27 Thread Derek Hans
Hi all,
This behavior makes the new data sharding functionality in v11 only
marginally useful as you can't shard across database instances.
Considering data sharding appeared to be one of the key improvements in
v11, I'm confused - am I misunderstanding the expected functionality?

Thanks!

On Fri, Feb 22, 2019 at 9:44 AM Derek Hans  wrote:

> I've set up 2 instances of PostgreSQL 11. On instance A, I created a table
> with 2 local partitions and 2 partitions on instance B using foreign data
> wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.
> Inserting rows into this table works as expected, with rows ending up in
> the appropriate partition. However, updating those rows only moves them
> across partitions in some of the situations:
>
>- From local partition to local partition
>- From local partition to foreign partition
>
> Rows are not moved
>
>- From foreign partition to local partition
>- From foreign partition to foreign partition
>
> Is this the expected behavior? Am I missing something or configured
> something incorrectly?
>
> Thanks,
> Derek
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: Update does not move row across foreign partitions in v11

2019-03-04 Thread Derek Hans
Based on a reply to my bug report, moving rows out of foreign
partitions is not yet implemented, so this is behaving as expected. There's
a mention of this limitation in the Notes section of the UPDATE docs.

On Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera 
wrote:

> On 2019-Feb-22, Derek Hans wrote:
>
> > I've set up 2 instances of PostgreSQL 11. On instance A, I created a
> table
> > with 2 local partitions and 2 partitions on instance B using foreign data
> > wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.
> > Inserting rows into this table works as expected, with rows ending up in
> > the appropriate partition. However, updating those rows only moves them
> > across partitions in some of the situations:
> >
> >- From local partition to local partition
> >- From local partition to foreign partition
> >
> > Rows are not moved
> >
> >- From foreign partition to local partition
> >- From foreign partition to foreign partition
> >
> > Is this the expected behavior? Am I missing something or configured
> > something incorrectly?
>
> Sounds like a bug to me.
>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


RE: segmentation fault

2025-04-23 Thread Zechman, Derek S
We got it from the PGDG repository.

The output of pg_config is below.  We have been struggling to get a 
stack trace, however – since we couldn't reproduce it with a vanilla 
installation and neither could you – we determined it must be something 
specific to our environment.  It turns out that when pg_show_plan is in 
shared_preload_libraries we get a segmentation fault.  I checked the 
GitHub repo from Cybertec and am surprised there have been no reported 
issues with it.  I will be posting there shortly.

BINDIR = /usr/pgsql-16/bin
DOCDIR = /usr/pgsql-16/doc
HTMLDIR = /usr/pgsql-16/doc/html
INCLUDEDIR = /usr/pgsql-16/include
PKGINCLUDEDIR = /usr/pgsql-16/include
INCLUDEDIR-SERVER = /usr/pgsql-16/include/server
LIBDIR = /usr/pgsql-16/lib
PKGLIBDIR = /usr/pgsql-16/lib
LOCALEDIR = /usr/pgsql-16/share/locale
MANDIR = /usr/pgsql-16/share/man
SHAREDIR = /usr/pgsql-16/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/pgsql-16/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE =  '--enable-rpath' '--prefix=/usr/pgsql-16' 
'--includedir=/usr/pgsql-16/include' '--mandir=/usr/pgsql-16/share/man' 
'--datadir=/usr/pgsql-16/share' '--libdir=/usr/pgsql-16/lib' '--with-lz4' 
'--with-zstd' '--enable-tap-tests' '--with-icu' '--with-llvm' '--with-perl' 
'--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' '--with-openssl' 
'--with-pam' '--with-gssapi' '--with-includes=/usr/include' 
'--with-libraries=/usr/lib64' '--enable-nls' '--enable-dtrace' 
'--with-uuid=e2fs' '--with-libxml' '--with-libxslt' '--with-ldap' 
'--with-selinux' '--with-systemd' '--with-system-tzdata=/usr/share/zoneinfo' 
'--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/pgsql-16/doc' 
'--htmldir=/usr/pgsql-16/doc/html' 'CFLAGS=-O2 -g -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS 
-fexceptions -fstack-protector-strong -grecord-gcc-switches 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 
'LDFLAGS=-Wl,--as-needed' 'LLVM_CONFIG=/usr/bin/llvm-config-64' 
'CLANG=/usr/bin/clang' 
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Werror=vla -Wendif-labels 
-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type 
-Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv 
-fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 
-g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong 
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
CFLAGS_SL = -fPIC
LDFLAGS = -Wl,--as-needed -L/usr/lib64 -L/usr/lib -L/usr/lib64 -Wl,--as-needed 
-Wl,-rpath,'/usr/pgsql-16/lib',--enable-new-dtags
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lselinux -lzstd -llz4 -lxslt -lxml2 -lpam -lssl 
-lcrypto -lgssapi_krb5 -lz -lreadline -lpthread -lrt -ldl -lm
VERSION = PostgreSQL 16.8

From: Tom Lane 
Sent: Wednesday, April 23, 2025 3:23 PM
To: Zechman, Derek S 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: segmentation fault

"Zechman, Derek S" <derek.s.zech...@snapon.com> writes:

> We are getting a segmentation fault which seems to be specific to pg16 on 
> redhat 8.  Tested on pg14 and pg15 with no problems.  Also tested with pg16 
> on redhat 9 - no issues.  The developer determined that it is specific to 
> select into a defined variable within a function.  We have a reproducible 
> test case that requires no data.  I determined that 2857 is the maximum 
> number that doesn't crash and anything higher than that causes a segfault.  
> Can anyone help with this problem?



FWIW, I cannot replicate this problem using this script.  Now I'm
testing v16 branch tip not 16.6, but a quick trawl through the git
history didn't find any plausibly-matching bug fixes since 16.6.

Where did you get your Postgres server executable from exactly,
and what options was it built with?  (The o

segmentation fault

2025-04-23 Thread Zechman, Derek S
Hello All,

We are getting a segmentation fault which seems to be specific to pg16 on 
redhat 8.  Tested on pg14 and pg15 with no problems.  Also tested with pg16 on 
redhat 9 - no issues.  The developer determined that it is specific to select 
into a defined variable within a function.  We have a reproducible test case 
that requires no data.  I determined that 2857 is the maximum number that 
doesn't crash and anything higher than that causes a segfault.  Can anyone help 
with this problem?

Thanks,
Sean

Fault:

kernel: postmaster[2983369]: segfault at 24d5ffb ip 7f7ae8413c8b sp 
7ffef0642d58 error 4 in libc-2.28.so[7f7ae83ef000+1cd000]
Apr 23 07:33:39 rich-pgrs-30-rv kernel: Code: 4c 16 f0 4d 89 d9 4d 89 d8 49 83 
e0 0f 4c 29 c1 4d 29 c1 4c 29 c2 66 0f 1f 84 00 00 00 00 00 0f 10 01 0f 10 49 
f0 0f 10 51 e0 <0f> 10 59 d0 48 83 c1 c0 48 83 c2 c0 41 0f 29 01 41 0f 29 49 f0 
4

Kernel:
Linux 4.18.0-553.40.1.el8_10.x86_64 #1 SMP Thu Feb 6 21:20:51 EST 2025 x86_64 
x86_64 x86_64 GNU/Linux
postgres (PostgreSQL) 16.6

Query:
DROP FUNCTION IF EXISTS pg_temp.foo();
CREATE FUNCTION pg_temp.foo()
RETURNS integer
LANGUAGE 'plpgsql'

AS $BODY$
DECLARE
bar integer;
BEGIN
        -- The issue occurs if function is called in SELECT where # of rows is 5000
SELECT 1 INTO bar;
-- The issue does not occur with either SELECT
--bar = 1;
return bar;
END;
$BODY$;

-- 5000 rows
SELECT pg_temp.foo() FROM (SELECT generate_series(1, 5000) AS id) x WHERE x.id 
IN 
(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527
,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,798,799,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814,815,816,817,818,819,820,821,822,823,824,825,826,827,828,829,830,831,832,833,834,835,836,837,838,839,840,841,842,843,844,845,846,847,848,849,850,851,852,853,854,855,856,857,858,859,860,861,862,863,864,865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880,881,882,883,884,8

RE: analyze-in-stages post upgrade questions

2025-06-27 Thread Zechman, Derek S
> > We recently performed an upgrade from pg14 (14.18) to pg16 (16.9) and
> > performed the analyze-in-stages post upgrade.  It has been noticed that
> > some plans changed to use hash joins instead of nested loops.  Further
> > investigation found it was because the parent table of partitioned
> > tables did not have stats.  After running an ANALYZE on the parent
> > tables we got similar plan an execution times as before.
> >
> > I have two questions
> >
> > 1 - Why does analyze-in-stages not analyze the parent tables?
> >
> > 2 - What happens if we do not run analyze-in-stages post upgrade and
> > just run an analyze?
>
> It is spelled out in the docs:
>
> https://www.postgresql.org/docs/current/pgupgrade.html
>
> Emphasis added
>
> "Using vacuumdb --all --analyze-only can efficiently generate such
> statistics, and the use of --jobs can speed it up. Option
> --analyze-in-stages can be used to generate **minimal statistics**
> quickly. If vacuum_cost_delay is set to a non-zero value, this can be
> overridden to speed up statistics generation using PGOPTIONS, e.g.,
> PGOPTIONS='-c vacuum_cost_delay=0' vacuumdb "
>
> and from here:
>
> https://www.postgresql.org/docs/current/app-vacuumdb.html
>
> "--analyze-in-stages
>
>  Only calculate statistics for use by the optimizer (no vacuum),
> like --analyze-only. Run three stages of analyze; the first stage uses
> the lowest possible statistics target (see default_statistics_target) to
> produce usable statistics faster, and subsequent stages build the full
> statistics.
>
>  This option is only useful to analyze a database that currently has
> no statistics or has wholly incorrect ones, such as if it is newly
> populated from a restored dump or by pg_upgrade. Be aware that running
> with this option in a database with existing statistics may cause the
> query optimizer choices to become transiently worse due to the low
> statistics targets of the early stages.

Well, that wouldn't explain why it doesn't work on partitioned tables.
I am under the impression that it should.

Derek, can you share the pg_stats entries for the partitioned table?

Yours,
Laurenz Albe





There are no entries in pg_stats for the parent table until after I manually 
run an analyze on it – Example below



=> select relname, reltuples, relkind from pg_class where relname ~ 
'^chapter_[0-9]+$' or relname='chapter' order by 1;
   relname   | reltuples | relkind
-------------+-----------+---------
 chapter     |        -1 | p
 chapter_1   |         4 | r
 chapter_10  |         4 | r
 chapter_100 |        30 | r
 chapter_101 |        15 | r
 chapter_102 |        15 | r
…

=> select count(*) from pg_stats where tablename='chapter';
 count
-------
     0
(1 row)

=> analyze chapter;
ANALYZE

=> select relname, reltuples, relkind from pg_class where relkind ='p' and 
relname='chapter';
 relname | reltuples | relkind
---------+-----------+---------
 chapter |      7589 | p
(1 row)

=> select count(*) from pg_stats where tablename='chapter';
 count
-------
    49
(1 row)

toy_epc_stg_1_db=>


RE: analyze-in-stages post upgrade questions

2025-06-27 Thread Zechman, Derek S



We recently performed an upgrade from pg14 (14.18) to pg16 (16.9) and performed 
the analyze-in-stages post upgrade.  It has been noticed that some plans 
changed to use hash joins instead of nested loops.  Further investigation found 
it was because the parent tables of partitioned tables did not have stats.  
After running an ANALYZE on the parent tables we got similar plans and 
execution times as before.

I have two questions
1 - Why does analyze-in-stages not analyze the parent tables?
2 – What happens if we do not run analyze-in-stages post upgrade and just run 
an analyze?

“It takes more time, and you don't have any statistics on a given table until 
the ANALYZE on that table completes.

How long did "vacuumdb --analyze-only --jobs=$mumble your_db" take?”

Thanks – that makes sense.  I understand what analyze-in-stages does; I just
wish it would include parent tables.

"vacuumdb --all --analyze-only --jobs=7" took about 75 minutes where the 
analyze-in-stages after upgrade took 115 minutes.  Neither of these activities 
analyzed the parent tables.
Reading more, it seems vacuumdb doesn't analyze parent tables, so a manual
analyze on those is needed if we want better planner statistics.
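Until vacuumdb covers them, one workaround is to generate the missing ANALYZE statements from the catalogs and execute them with psql's \gexec. A sketch, to be run once per database after pg_upgrade:

```sql
-- Analyze every partitioned parent table (relkind 'p'), which
-- vacuumdb --analyze-only currently skips.
SELECT format('ANALYZE %I.%I;', n.nspname, c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'p' \gexec
```

\gexec runs each row of the result as a separate SQL statement, so this scales to any number of partitioned tables without hand-written scripts.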


analyze-in-stages post upgrade questions

2025-06-27 Thread Zechman, Derek S

We recently performed an upgrade from pg14 (14.18) to pg16 (16.9) and ran the
analyze-in-stages step post upgrade.  It has been noticed that some plans
changed to use hash joins instead of nested loops.  Further investigation found
it was because the parent tables of partitioned tables did not have stats.
After running an ANALYZE on the parent tables we got similar plans and execution
times as before.

I have two questions:
1 - Why does analyze-in-stages not analyze the parent tables?
2 - What happens if we do not run analyze-in-stages post upgrade and just run
an analyze?

Thanks,
Sean


RE: analyze-in-stages post upgrade questions

2025-07-09 Thread Zechman, Derek S
> > Well, that wouldn't explain why it doesn't work on partitioned tables.
> > I am under the impression that it should.
> >
> > Derek, can you share the pg_stats entries for the partitioned table?
>
> There are no entries in pg_stats for the parent table until after I manually
> run an analyze on it – example below



You are right.  I looked at the code, and "vacuumdb" does not process
partitioned tables, even if --analyze-only is specified.  I find that
surprising, since the SQL command ANALYZE (without a table name) will
also collect statistics for partitioned tables.

I think that it would be a good idea to change that behavior.
In particular, it makes a lot of sense to collect statistics for
partitioned tables after a "pg_upgrade".

Attached is a patch to make "vacuumdb --analyze-only" consider
partitioned tables as well.

Yours,
Laurenz Albe



Is there a plan to include this patch in future releases/patches of postgres?



Thanks,

(Derek) Sean


PgAdmin4 - 'NoneType' object has no attribute 'value'

2021-05-19 Thread Derek van den Nieuwenhuijzen
Hi Postgres,

I used to have a functioning PostgreSQL database, but upon reinstalling due to 
computer issues, I am now getting this error after launching:

Traceback (most recent call last):
  File "C:\Program Files\PostgreSQL\12\pgAdmin 4\web\pgAdmin4.py", line 98, in <module>
    app = create_app()
  File "C:\Program Files\PostgreSQL\12\pgAdmin 4\web\pgadmin\__init__.py", line
347, in create_app
    if not os.path.exists(SQLITE_PATH) or get_version() == -1:
  File "C:\Program Files\PostgreSQL\12\pgAdmin
4\web\pgadmin\setup\db_version.py", line 19, in get_version
    return version.value
AttributeError: 'NoneType' object has no attribute 'value'
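The AttributeError means the version lookup against pgAdmin's SQLite configuration database returned no row (typically a truncated or uninitialized pgadmin4.db), and the code dereferences that None. A minimal sketch of a defensive version of the lookup, using plain sqlite3 with assumed table/column names ('version', 'value') for illustration; the real pgAdmin code goes through an ORM model:

```python
import sqlite3

def get_version(db_path):
    """Return the stored config-DB version, or -1 when it cannot be read.

    Table/column names here are assumptions for illustration only.
    """
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute("SELECT value FROM version").fetchone()
    except sqlite3.OperationalError:  # table missing: DB never initialized
        return -1
    finally:
        conn.close()
    # fetchone() is None when the table exists but holds no row --
    # the exact case that raised AttributeError in the traceback above.
    return row[0] if row is not None else -1
```

Returning -1 matches the `get_version() == -1` check in create_app, which triggers re-initialization; in practice, deleting or renaming the broken pgadmin4.db so it is recreated on the next launch resolves this class of error.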

I've tried everything from these Stack Overflow threads
(https://stackoverflow.com/questions/43211296/pgadmin4-postgresql-application-server-could-not-be-contacted
and
https://stackoverflow.com/questions/67559636/pgadmin4-nonetype-object-has-no-attribute-value/67570989#67570989)
to fix it, but the error remains. I'm out of options; hopefully
you know the answer.

I tried it with PostgreSQL 11, 12 and 13. I'm using 64-bit Windows with
pgAdmin 4.

Kind regards,

Derek