Re: [bug]? insert returning composite type fails
Adrian come on 🤠 This is a reduced example. The real use case involves many tables with the bitemporal record. However, I solved it using a record type ...

On Fri, 7 Jul 2023 at 01:20, Adrian Klaver wrote:

> On 7/6/23 14:52, Lorusso Domenico wrote:
> > Hello guys,
> > In my db (version 15) I've defined a composite type with some domains:
> >
> > CREATE DOMAIN my_feat.audit_record_jsonb_domain
> >     AS jsonb
> >     NOT NULL;
> >
> > ALTER DOMAIN my_feat.audit_record_jsonb_domain OWNER TO postgres;
> >
> > CREATE DOMAIN my_feat.boolean_true_domain
> >     AS boolean
> >     DEFAULT true
> >     NOT NULL;
> >
> > ALTER DOMAIN my_feat.boolean_true_domain OWNER TO postgres;
> >
> > CREATE TYPE my_feat.bitemporal_record AS
> > (
> >     user_ts_range tstzrange,
> >     db_ts_range tstzrange,
> >     has_future_record timestamp with time zone,
> >     audit_record my_feat.audit_record_jsonb_domain,
> >     is_valid my_feat.boolean_true_domain
> > );
> >
> > ALTER TYPE my_feat.bitemporal_record
> >     OWNER TO postgres;
> >
> > So I've a table like that:
> >
> > CREATE TABLE IF NOT EXISTS my_feat.try_bt_info
> > (
> >     id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY
> >         ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
> >     bt_info my_feat.bitemporal_record,
> >     CONSTRAINT try_bt_info_pk PRIMARY KEY (id)
> > )
>
> Seems a long way around to arrive at:
>
> CREATE TABLE IF NOT EXISTS my_feat.try_bt_info
> (
>     id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY
>         ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ) PRIMARY KEY,
>     user_ts_range tstzrange,
>     db_ts_range tstzrange,
>     has_future_record timestamp with time zone,
>     audit_record jsonb NOT NULL,
>     is_valid boolean NOT NULL DEFAULT 't'
> );
>
> > --
> > Domenico L.
> >
> > "To amaze for half an hour, a history book is enough; I tried to learn the Treccani by heart..." [F.d.A.]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
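Since the thread stops short of showing the failing statement, here is a hedged sketch of the pattern under discussion: inserting a value of the composite type and capturing the result of RETURNING into a plain record variable, the workaround Domenico reports using. The table and type come from the quoted message; the specific values are illustrative assumptions only.

```sql
-- Illustrative only: assumes the my_feat.try_bt_info table and
-- my_feat.bitemporal_record type defined earlier in the thread.
DO $$
DECLARE
    r record;  -- a plain record, the workaround mentioned in the thread
BEGIN
    INSERT INTO my_feat.try_bt_info (bt_info)
    VALUES (ROW(tstzrange(now(), NULL),
                tstzrange(now(), NULL),
                NULL,
                '{}'::jsonb,
                true)::my_feat.bitemporal_record)
    RETURNING id, bt_info INTO r;

    RAISE NOTICE 'inserted id %, bt_info %', r.id, r.bt_info;
END
$$;
```

Declaring the target variable as `my_feat.bitemporal_record` instead of `record` is, as far as the thread shows, where the reduced example ran into trouble; the plain `record` sidesteps that assignment.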
temp table security labels
I'm seeing security labels set on db objects when a temporary table is created, but they don't match the labels I've configured in my sepgsql_contexts file. Our code doesn't actually execute a "SECURITY LABEL ..." command for the temp tables, but something else must be doing so, because I see pg_temp_NNN in pg_seclabels. So I'm confused about how the objects are being labeled. Can anyone help me understand how this is happening?

Ted
Re: temp table security labels
Also, AFAICT the schema created for the temp table does not get deleted at the end of the session, which also causes issues.

On Fri, Jul 7, 2023 at 7:43 AM Ted Toth wrote:
>
> I'm seeing security labels set on db objects when a temporary table is
> created, but they don't match the labels I've configured in my
> sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> LABEL ..." command for the temp tables, but something else must be
> doing so, because I see pg_temp_NNN in pg_seclabels. So I'm confused
> about how the objects are being labeled. Can anyone help me understand
> how this is happening?
>
> Ted
track_activity_query_size max practical size?
We've got some Very Large Queries that take a long time. Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am concerned about side effects of setting it to 48KB or even 64KB.

--
Born in Arizona, moved to Babylonia.
Re: track_activity_query_size max practical size?
On 7/7/23 07:42, Ron wrote:
> We've got some Very Large Queries that take a long time.

An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.

> Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am

It is before coffee here, so you will need to spell out what taqs is?

> concerned about side effects of setting it to 48KB or even 64KB.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: track_activity_query_size max practical size?
On 7/7/23 07:51, Adrian Klaver wrote:
> On 7/7/23 07:42, Ron wrote:
> > We've got some Very Large Queries that take a long time.
>
> An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.
>
> > Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am
>
> It is before coffee here, so you will need to spell out what taqs is?

Did I mention lack of coffee? I see track_activity_query_size in the subject. So the issue is what with the current setting?

> > concerned about side effects of setting it to 48KB or even 64KB.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: track_activity_query_size max practical size?
On 7/7/23 09:51, Adrian Klaver wrote:
> On 7/7/23 07:42, Ron wrote:
> > We've got some Very Large Queries that take a long time.
>
> An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.

You can't run EXPLAIN (ANALYZE, BUFFERS) if you don't have a query to run. That's what track_activity_query_size is for.

> > Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am
>
> It is before coffee here, so you will need to spell out what taqs is?

track_activity_query_size

> > concerned about side effects of setting it to 48KB or even 64KB.

--
Born in Arizona, moved to Babylonia.
Re: track_activity_query_size max practical size?
On 7/7/23 09:55, Adrian Klaver wrote:
> On 7/7/23 07:51, Adrian Klaver wrote:
> > On 7/7/23 07:42, Ron wrote:
> > > We've got some Very Large Queries that take a long time.
> >
> > An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.
> >
> > > Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am
> >
> > It is before coffee here, so you will need to spell out what taqs is?
>
> Did I mention lack of coffee? I see track_activity_query_size in the subject. So the issue is what with the current setting?

10KB isn't enough to capture the whole query. Are there any bad side effects of increasing it from 10KB to 48KB or 64KB?

> > > concerned about side effects of setting it to 48KB or even 64KB.

--
Born in Arizona, moved to Babylonia.
Re: track_activity_query_size max practical size?
On 7/7/23 07:58, Ron wrote:
> On 7/7/23 09:55, Adrian Klaver wrote:
> > On 7/7/23 07:51, Adrian Klaver wrote:
> > > On 7/7/23 07:42, Ron wrote:
> > > > We've got some Very Large Queries that take a long time.
> > >
> > > An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.
> > >
> > > > Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am
> > >
> > > It is before coffee here, so you will need to spell out what taqs is?
> >
> > Did I mention lack of coffee? I see track_activity_query_size in the subject. So the issue is what with the current setting?
>
> 10KB isn't enough to capture the whole query. Are there any bad side effects of increasing it from 10KB to 48KB or 64KB?

So to be clear: the text of the query is large and not all of it is being shown in pg_stat_activity.query.

I have never had the need to increase that setting, so all I can do is speculate. From the docs: 'Specifies the amount of memory reserved to store the text of the currently executing command for each active session, ...'. So given a lot of active sessions, with each query being a large text value, there could be a side effect at some point.

The bigger issue would seem to be 'This parameter can only be set at server start'. So you are looking at starting/stopping the server to find the sweet spot.

> > > > concerned about side effects of setting it to 48KB or even 64KB.

--
Adrian Klaver
adrian.kla...@aklaver.com
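Since the docs quoted above describe memory "reserved ... for each active session", a rough worked estimate helps frame the question. This is a back-of-the-envelope sketch: the 500-connection figure is an assumption for illustration, and real shared-memory accounting has additional per-slot overhead.

```python
# Back-of-the-envelope estimate of the memory reserved for query text,
# assuming roughly one track_activity_query_size-sized buffer per
# connection slot (an approximation, not exact server accounting).
def query_text_memory_mb(track_activity_query_size_kb: int,
                         max_connections: int) -> float:
    """Approximate memory (MB) reserved for pg_stat_activity query text."""
    return track_activity_query_size_kb * max_connections / 1024

# Hypothetical server with max_connections = 500:
print(query_text_memory_mb(10, 500))   # current setting: ~4.9 MB
print(query_text_memory_mb(64, 500))   # proposed setting: 31.25 MB
```

Even at the high end this is tens of megabytes, which suggests the restart requirement, rather than the memory itself, is the main cost on most servers.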
Re: track_activity_query_size max practical size?
On 7/7/23 10:13, Adrian Klaver wrote:
> On 7/7/23 07:58, Ron wrote:
> > On 7/7/23 09:55, Adrian Klaver wrote:
> > > On 7/7/23 07:51, Adrian Klaver wrote:
> > > > On 7/7/23 07:42, Ron wrote:
> > > > > We've got some Very Large Queries that take a long time.
> > > >
> > > > An EXPLAIN (ANALYZE, BUFFERS) would go a long way here.
> > > >
> > > > > Even setting taqs to 10KB isn't adequate, so I want to significantly bump it, but am
> > > >
> > > > It is before coffee here, so you will need to spell out what taqs is?
> > >
> > > Did I mention lack of coffee? I see track_activity_query_size in the subject. So the issue is what with the current setting?
> >
> > 10KB isn't enough to capture the whole query. Are there any bad side effects of increasing it from 10KB to 48KB or 64KB?
>
> So to be clear: the text of the query is large and not all of it is being shown in pg_stat_activity.query.
>
> I have never had the need to increase that setting, so all I can do is speculate. From the docs: 'Specifies the amount of memory reserved to store the text of the currently executing command for each active session, ...'. So given a lot of active sessions, with each query being a large text value, there could be a side effect at some point.
>
> The bigger issue would seem to be 'This parameter can only be set at server start'. So you are looking at starting/stopping the server to find the sweet spot.

Restarting is something I can do during a quiet point after the bulk of the day's work is complete, but before the nightly backups.

> > > > > concerned about side effects of setting it to 48KB or even 64KB.

--
Born in Arizona, moved to Babylonia.
Re: temp table security labels
I see it now: sepgsql_schema_post_create is doing it ... it's just not doing what I think it should :(

On Fri, Jul 7, 2023 at 8:05 AM Ted Toth wrote:
>
> Also, AFAICT the schema created for the temp table does not get deleted
> at the end of the session, which also causes issues.
>
> On Fri, Jul 7, 2023 at 7:43 AM Ted Toth wrote:
> >
> > I'm seeing security labels set on db objects when a temporary table is
> > created, but they don't match the labels I've configured in my
> > sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> > LABEL ..." command for the temp tables, but something else must be
> > doing so, because I see pg_temp_NNN in pg_seclabels. So I'm confused
> > about how the objects are being labeled. Can anyone help me understand
> > how this is happening?
> >
> > Ted
Re: temp table security labels
It seems to me that sepgsql_schema_post_create should be doing a selabel_lookup and using that default label if one exists, instead of computing the label ... hmm, I'll have to think about it.

On Fri, Jul 7, 2023 at 10:44 AM Ted Toth wrote:
>
> I see it now: sepgsql_schema_post_create is doing it ... it's just not
> doing what I think it should :(
>
> On Fri, Jul 7, 2023 at 8:05 AM Ted Toth wrote:
> >
> > Also, AFAICT the schema created for the temp table does not get deleted
> > at the end of the session, which also causes issues.
> >
> > On Fri, Jul 7, 2023 at 7:43 AM Ted Toth wrote:
> > >
> > > I'm seeing security labels set on db objects when a temporary table is
> > > created, but they don't match the labels I've configured in my
> > > sepgsql_contexts file. Our code doesn't actually execute a "SECURITY
> > > LABEL ..." command for the temp tables, but something else must be
> > > doing so, because I see pg_temp_NNN in pg_seclabels. So I'm confused
> > > about how the objects are being labeled. Can anyone help me understand
> > > how this is happening?
> > >
> > > Ted
Re: track_activity_query_size max practical size?
On 7/7/23 08:27, Ron wrote:
> Restarting is something I can do during a quiet point after the bulk of the day's work is complete, but before the nightly backups.

For completeness there is the auto_explain module:

https://www.postgresql.org/docs/current/auto-explain.html

--
Adrian Klaver
adrian.kla...@aklaver.com
pgbouncer best practices
I have an application that does many db calls from a server farm. I've increased max_connections on PostgreSQL to 1000 and tuned the server accordingly. However, I still sometimes get "can't connect to PostgreSQL" errors. I installed pgbouncer on a few servers in the farm, pointed the traffic at pgbouncer, and things improved dramatically.

My question is: is there a ratio of max connections and pool size I should use in my pgbouncer config?

--
--- Get your facts first, then you can distort them as you please. ---
Re: pgbouncer best practices
Rita wrote on 7/7/23 9:23 AM:
> I have an application that does many db calls from a server farm. I've
> increased my max connections on postgresql to 1000 and tuned the server
> accordingly. However, I still get can't connect to postgresql some times.
> I installed pgbouncer on few servers in the farm. I pointed the traffic
> to pgbouncer and things helped dramatically. My question are: is there a
> ratio of max connections and pool i should use in my pgbouncer config?

Does your application allow for transaction pooling? pgBouncer can unlock a lot more efficiency if you can cycle server slots on transaction boundaries.
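If transaction pooling is an option, the switch is a one-line setting. A hypothetical pgbouncer.ini fragment (database name, host, and sizes here are illustrative assumptions, not recommendations):

```ini
; Hypothetical fragment -- names and sizes are examples only.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; release the server slot at each transaction end
max_client_conn = 1000    ; client connections pgbouncer will accept
default_pool_size = 50    ; server connections per database/user pair
```

Note that transaction pooling can break session-level features (e.g. session-scoped prepared statements, advisory locks, non-LOCAL SET), so it only fits applications that avoid them.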
Re: pgbouncer best practices
On Fri, Jul 7, 2023 at 9:24 AM Rita wrote:
>
> My question are: is there a ratio of max connections and pool i should use in
> my pgbouncer config?

Short answer: No, there's no recommended ratio for PG max_connections and pgbouncer pool size.

In pgbouncer, a client connection is matched (forwarded) to a server connection in the pool. But if all the server connections in the pool are already busy servicing other client connections, then any newly active client connections are made to wait until a server connection is available.

Depending on your workload, there are recommendations for how large to set the max_connections on Postgres. But for connection poolers like pgbouncer, there really isn't a recommended upper limit. Since every connection (whether idle or active) takes up memory (~2kB), the only upper limit is how much RAM you have available for pgbouncer to use.

It's been a while since I used pgbouncer, so there might be inconsistencies here. Someone with more current knowledge may want to correct me.

Best regards,
Gurjeet
http://Gurje.et
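The sizing logic in the reply above can be sketched numerically. The ~2kB-per-connection figure comes from the message itself; the example connection counts are assumptions for illustration.

```python
# Sketch of the sizing arithmetic from the reply above.
PER_CONN_KB = 2  # approximate pgbouncer memory per tracked connection (from the thread)

def pgbouncer_client_memory_mb(client_connections: int) -> float:
    """Approximate RAM (MB) pgbouncer needs to track client connections."""
    return client_connections * PER_CONN_KB / 1024

def waiting_clients(active_clients: int, pool_size: int) -> int:
    """Clients queued when all pooled server connections are busy."""
    return max(0, active_clients - pool_size)

print(pgbouncer_client_memory_mb(10_000))  # 10k clients -> ~19.5 MB
print(waiting_clients(120, 50))            # 70 clients wait for a free server slot
```

The second function is the behavior to watch: a too-small pool shows up as client wait time, not as connection errors, so the pool size should be tuned against observed wait rather than against max_connections.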