Re: pgBadger: Cannot find any log entries from systemd-journald
On Tue, Mar 5, 2024 at 3:14 AM Frank Lanitz wrote:

> $ pgbadger --journalctl "journalctl -u postgresql.service"

You could try adding --verbose to see if it gives more clues.

> Having a look into the journal there is a lot of [...]

None of the snippets from journald you showed were actually things pgbadger cares about, FWIW. You can get an idea of what is actually parsed by running:

journalctl -u postgresql --output=short-iso

> log_error_verbosity = 'verbose'

This is probably not needed, and there is a finite chance that the extra context is confusing pgbadger.

Cheers,
Greg
Re: Feature Request: log_line_prefix %h Counterpart
On Fri, Jan 26, 2024 at 1:44 AM Jawala wrote:

> Looking for an escape sequence for log_line_prefix for the IP address that
> traffic was destined to, i.e., the *local* address of the TCP connection,
> the natural counterpart to "remote IP address".

I made a quick proof of concept of this, and will post a patch to -hackers soon and CC you.

Cheers,
Greg
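For anyone following along, a sketch of the existing escape this would complement; %h is today's remote-address escape, and the requested local-address counterpart does not exist in any released version, so nothing below is new syntax:

-- %m = timestamp, %p = process ID, %h = remote (client) host
ALTER SYSTEM SET log_line_prefix = '%m [%p] %h ';
SELECT pg_reload_conf();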
Re: Feature Request: log_line_prefix %h Counterpart
Actually, will just post it here in case someone finds this thread and wants to follow along: https://www.postgresql.org/message-id/flat/CAKAnmmK-U%2BUicE-qbNU23K--Q5XTLdM6bj%2BgbkZBZkjyjrd3Ow%40mail.gmail.com
Re: Thoughts on user-defined types for talk at Postgres conference?
Greetings,

* Guyren Howe (guy...@gmail.com) wrote:
> But what *really* sets Postgres apart from comparable systems is user defined
> types. I would like to carefully lay out how to define and use a user-defined
> type (I don’t think I have time to dig into doing fancy stuff with C
> functions, so just the basic “user defined sum type”), but also any gotchas.

I take it what you're referring to here is most specifically called a "Composite Type", as discussed here:

https://www.postgresql.org/docs/current/sql-createtype.html

> And I’d like to finish with some thoughts about when and how to use
> user-defined types. My feeling is that this feature is greatly under-used,
> mostly because it’s so non-standard. But AFAICT, user-defined types are fine
> and other than some ugliness due to SQL (mainly needing parentheses in some
> unexpected places), fields in a user defined type work perfectly well in
> Postgres’s SQL. I guess you’d need to pull them apart for values returned to
> clients, but that isn’t difficult.

Yeah, it's a bit grotty when you're trying to reference them (as you note with the comment about the extra parentheses), but it's also a bit painful building them up to supply as input (though not impossible).

> So, any gotchas with user defined types? Any thoughts about designing with
> them?

The biggest gotcha (which is a bit different from the usability issues discussed above), from my perspective, is the serialization of composite types: we use up a lot more space to store a composite type that looks like:

CREATE TYPE twoint AS (a int, b int);

than if we create a table as:

CREATE TABLE t1 (a int, b int);

To show this, following the above two commands, do:

CREATE TABLE t2 (c1 twoint);

INSERT INTO t1 VALUES (1,2);
INSERT INTO t2 VALUES ('(1,2)');

=*# select pg_column_size(a) from t1;
 pg_column_size
----------------
              4
(1 row)

=*# select pg_column_size(b) from t1;
 pg_column_size
----------------
              4
(1 row)

=*# select pg_column_size(c1) from t2;
 pg_column_size
----------------
             29
(1 row)

We can see it for the whole row too:

=*# select pg_column_size(t1.*) from t1;
 pg_column_size
----------------
             32
(1 row)

=*# select pg_column_size(t2.*) from t2;
 pg_column_size
----------------
             53
(1 row)

That's an additional 21 bytes, which is really quite a lot. Included in those 21 bytes are the length word (since it's now a variable-length column to PG, unlike the individual int columns in the table, whose lengths we know from the table structure), the type information (the typmod, if there is one, and the OID of the composite type), some other info, and then the actual data of the two int values inside the composite type.

I've often wondered if there's a way to reduce this overhead, as I do think that, plus some improvements on the usability side, would go a long way to making composite types more appealing to users. Still, they are certainly handy in some instances; I just can't recommend heavy use of them for large data sets where size is particularly important (such as in heavy OLTP environments) due to their overhead.

Thanks!

Stephen
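To make the usability point concrete, a small sketch (reusing the twoint type above) of the parentheses needed when referencing composite fields, and of building a value up as input:

-- Referencing a field needs extra parentheses so the parser doesn't
-- read "c1.a" as a table-qualified column reference
SELECT (c1).a, (c1).b FROM t2;

-- Building a value up for input: a ROW constructor cast to the type
INSERT INTO t2 VALUES (ROW(3,4)::twoint);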
Re: Thoughts on user-defined types for talk at Postgres conference?
Thanks for that!

It seems as though a Composite Type having only fixed-length fields should be able to be regarded as a fixed-length value.

On 6 Mar 2024 at 09:45 -0800, Stephen Frost wrote:
> [snip]
extract ddl to devops pipeline
Hello guys,
I need to export the DDL (tables, functions, views) of some schemas (with cross-references) to load them into a DevOps-ready pipeline. Problem: export/backup doesn't care about the appropriate ordering of objects, because ordering gets resolved during the import phase. So, is there a way to automatically generate the DDL in the right order? I mean, if function Foo references table Bar in its input/output definition (or in its DECLARE section?), I need to create the table before the function. And if Bar uses function Lir (in a trigger? a check? elsewhere?), Lir must be created before Bar.
Re: extract ddl to devops pipeline
> On Mar 6, 2024, at 13:18, Lorusso Domenico wrote:
> So there is a way to automatically generate DDL in the right order?

Standard pg_dump creates files that are in the proper order, although if you exclude some tables or schemas from the backup, those exclusions might cause errors if they are referenced by the objects you *do* import.
v11.5- v15.3 upgrade (linux)
Hi:
I'm a PG user in a big corp with an IT dept that administers a PG server/instance that I use. It's an old install, v11.5, and we need to upgrade to v15.3. They want to bring the upgraded DB up on a new Linux VM which has OS upgrades of its own. So it's a move AND an upgrade. There are two concerns.

First has to do with the jump from 11.5 to 15.3. Is it safe to do this given that so many major intermediate versions are being skipped?

Second has to do with the 11.5 having the plperlu extension installed. The DBA created a v15.3 instance on the new server and tried to restore an 11.5 backup onto/into the 15.3 (as an experiment) but got several error messages like this... "ERROR: extension "plperlu" is not available". I really didn't need any of the plperlu procs/funcs I created, so I dropped them all, but the error persists. I suggested dropping the plperlu extension in the 11.5 (drop extension plperlu cascade) but there is a concern that it might break something. So the question is... will dropping the plperlu extension break anything (given that there are no plperlu procs/funcs)? Just to be safe, "select l.lanname, count(*) from pg_proc p, pg_language l WHERE p.prolang = l.oid group by 1;" shows that there are no plperlu procs/fns.

Finally, what is the best approach to making the server move AND the PG upgrade...

1) backup the 11.5 and restore into a 15.3 PG instance on the upgraded server?
2) upgrade the 11.5 DB to 15.3, back that up, and then restore on the upgraded server?
3) upgrade the 11.5 DB to 15.3, then set the 15.3 destination server up as a replicated DB (let it populate), then designate the destination server as the primary and cut the old server loose? (This approach would probably minimize DB downtime by a lot, I would think.)

Any other suggestions? Thanks for any advice/help!
Re: extract ddl to devops pipeline
On 3/6/24 13:18, Lorusso Domenico wrote:
> Hello guys, I need to export the DDL (tables, functions, views) of some
> schemas (with cross-references) to load them into a DevOps-ready pipeline.
> Problem: export/backup doesn't care about the appropriate ordering of
> objects, because ordering gets resolved during the import phase.

pg_dump -Fc ... -f dump_file.out

From here https://www.postgresql.org/docs/current/app-pgrestore.html

"
-l
--list

List the table of contents of the archive. The output of this operation can be used as input to the -L option. Note that if filtering switches such as -n or -t are used with -l, they will restrict the items listed.
"

Then:

pg_restore -l -f toc_list.txt dump_file.out

Then from the link above:

"
-L list-file
--use-list=list-file

Restore only those archive elements that are listed in list-file, and restore them in the order they appear in the file. Note that if filtering switches such as -n or -t are used with -L, they will further restrict the items restored. list-file is normally created by editing the output of a previous -l operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the start of the line. See below for examples.
"

Open toc_list.txt in a text editor, comment (;) out the items you don't want, and then feed the edited TOC back to pg_restore as:

pg_restore -L toc_list.txt -f edited_dump.sql dump_file.out

where edited_dump.sql will be a plain-text dump file with the DDL statements.

> So there is a way to automatically generate DDL in the right order? I mean,
> if function Foo references table Bar in its input/output definition (or in
> its DECLARE section?), I need to create the table before the function. And
> if Bar uses function Lir (in a trigger? a check? elsewhere?), Lir must be
> created before Bar.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: v11.5- v15.3 upgrade (linux)
On 3/6/24 16:19, David Gauthier wrote:
> First has to do with the jump from 11.5 to 15.3. Is it safe to do this
> given that so many major intermediate versions are being skipped?

I would definitely read the release notes for 12.0, 13.0, 14.0 and 15.0. They will show the breaking changes.

> Second has to do with the 11.5 having the plperlu extension installed. The
> DBA created a v15.3 instance on the new server and tried to restore an 11.5
> backup onto/into the 15.3 (as an experiment) but got several error messages
> like this... "ERROR: extension "plperlu" is not available".

How and from where are you installing the Postgres package(s)? Generally the procedural-language packages are separate from the server package. I'm betting the error is occurring because the plperlu package was not installed for the 15.3 instance of Postgres.

> I suggested dropping the plperlu extension in the 11.5 (drop extension
> plperlu cascade) but there is a concern that it might break something.
> So the question is... will dropping the plperlu extension break anything
> (given that there are no plperlu procs/funcs)?

That would be plan B, if installing the plperlu package is not possible.

> 1) backup the 11.5 and restore into a 15.3 PG instance on the upgraded
> server?

I would back up the 11.5 instance whatever else you plan to do; better safe than sorry.

> 2) upgrade the 11.5 DB to 15.3, back that up, and then restore on the
> upgraded server?
> 3) upgrade the 11.5 DB to 15.3, then set the 15.3 destination server up as
> a replicated DB (let it populate), then designate the destination server as
> the primary and cut the old server loose?

The above is going to depend on the size of the database and any possible breaking/behavioral changes you find in the release notes. If you have the ability to do a test migration on a test machine, where an oops is not a problem, that would be the way to verify options 2) and 3).

--
Adrian Klaver
adrian.kla...@aklaver.com
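A quick way to check Adrian's theory on the 15.3 server: a sketch query against the standard pg_available_extensions view (if plperlu doesn't show up at all, the OS package is missing):

-- Is plperlu packaged and available on this server?
select name, default_version, installed_version
from pg_available_extensions
where name = 'plperlu';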
Re: v11.5- v15.3 upgrade (linux)
On Wed, Mar 6, 2024 at 07:19:20PM -0500, David Gauthier wrote: > Hi: > I'm a PG user in a big corp with an IT dept that administers a PG server/ > instance that I use. It's an old install, v11.5, and we need to upgrade to > v15.3. They want to bring the upgraded DB up on a new linux vm which has OS > upgrades of its own. So it's a move AND an upgrade. There are 2 concerns > > First has to do with a jump from 11.5 - 15.3 ? Is it safe to do this given so > many major intermediate versions being skipped ? Why are you going to 15.3 when 15.6 is the most recent minor PG 15 version? https://www.postgresql.org/support/versioning/ -- Bruce Momjian https://momjian.us EDB https://enterprisedb.com Only you can decide what is important to you.
Re: v11.5- v15.3 upgrade (linux)
On Wed, Mar 6, 2024 at 7:19 PM David Gauthier wrote:
> Hi:
> I'm a PG user in a big corp with an IT dept that administers a PG
> server/instance that I use. It's an old install, v11.5, and we need to
> upgrade to v15.3.

Have them upgrade the current server to 11.22 first. It's trivially easy, with only a few minutes of down time. Remember (and tell your IT dept) that PG point releases do not add new features, only bug fixes. That means point releases aren't nearly as risky as they are with other products.
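If useful, a quick sanity check before and after the minor bump (a sketch; both forms are standard):

-- Confirm the running server version
select version();
show server_version;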
Re: Thoughts on user-defined types for talk at Postgres conference?
Greetings,

* Guyren Howe (guy...@gmail.com) wrote:
> Thanks for that!

We'd generally prefer that responses be in-line (as I'm doing here) and not top-posted (as you did in your response). This helps others reading the lists follow the discussion.

> It seems as though a Composite Type having only fixed-length fields should
> be able to be regarded as a fixed-length value.

... and I nearly missed this comment due to the above.

While I can understand where you're coming from with that, it's certainly not something the system is built to handle today and, at least from my recent review of this area, it looks like there would be a fair bit of code involved to get to that point. I do agree with you that this would be very nice to have, and perhaps someone will find the interest and time to work on moving us in that direction in the future.

Thanks!

Stephen
Re: Thoughts on user-defined types for talk at Postgres conference?
Stephen Frost writes:
> * Guyren Howe (guy...@gmail.com) wrote:
>> It seems as though a Composite Type having only fixed-length fields should
>> be able to be regarded as a fixed-length value.

> While I can understand where you're coming from with that, it's
> certainly not something the system is built to handle today and, at
> least from my recent review of this area, it looks like there would be a
> fair bit of code involved to get to that point.

There are some other fundamental problems here:

* To conclude that a tuple of only fixed-length column types is itself fixed-length, you need an additional assumption that all the columns are not-null. This is pretty problematic because we don't even have a way to enforce such a requirement against generic composite types; we really only promise to enforce column constraints against columns of physical tables.

* Even if all the semantic enforcement of that were bulletproof, we'd still have a big problem with compatibility of the on-disk representation between current releases and a future release that thought it could elide some part of the tuple header for all-not-null stored composite values.

I don't say that these problems are insoluble, but they do look pretty difficult.

regards, tom lane
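A minimal sketch of the first point above, reusing the twoint type from earlier in the thread: composite-type attributes carry no enforceable NOT NULL, so nothing stops a NULL field inside a value.

-- Accepted without complaint: the b field is NULL, so a stored twoint
-- can never be assumed all-not-null (and hence fixed-length)
SELECT ROW(1, NULL)::twoint;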