Re: vacuumdb not letting me connect to db
> On Feb 5, 2021, at 9:37 AM, Ron wrote:
>
>> On 2/5/21 10:22 AM, Rob Sargent wrote:
>>
>>> On 2/5/21 9:11 AM, Ron wrote:
>>> Obviously... don't use 300 threads.
>>>
>> No, no Ron. Clearly the answer is more CPUs.
>
> I hope you're being sarcastic.
>
> --
> Angular momentum makes the world go 'round.

Ron, I was being sarcastic, but I've gone over the thread and I do not see any mention of the number of cores on the server. If it has a single quad-core chip, then the queue for each processor could be on average 75 deep at the outset, all of them anxiously awaiting data. The -j connections are processes, correct? Not threads.

Atul, if you list the tools you know and have used for watching system performance, then others can perhaps suggest alternatives (or re-interpret the results).
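To make the arithmetic explicit, here is a minimal sketch. The quad-core figure is an assumption for illustration; the actual core count was never stated in the thread.

```python
# Rough queue-depth estimate for vacuumdb -j: each -j "job" is a separate
# backend process, so N jobs on C cores leaves roughly N/C runnable
# processes queued per core (ignoring I/O waits and scheduler details).
def avg_queue_depth(jobs: int, cores: int) -> float:
    return jobs / cores

# 300 parallel jobs on a hypothetical single quad-core chip:
print(avg_queue_depth(300, 4))  # 75.0
```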
Re: stale WAL files?
> On Mar 29, 2019, at 6:58 AM, Michael Paquier wrote:
>
>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>> This is pg10 so it's pg_wal.  ls -ltr
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B1
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B2
>>
>> ... 217 more on through to ...
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E8
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E9
>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46 00010CEA000E
>
> In Postgres 10 and older versions, the server keeps WAL segments for
> the last completed segment, and the previous completed segment.  So
> even if a checkpoint is issued, the current WAL insert point is never
> really going to be on the first segment in pg_wal.  Isn't that the
> origin of what you think is a problem?  So, say, if you issue a
> checkpoint again, don't you see 00010CEA00B1 going away?

I had CEA00015, generated today, as the only entry since Mar 16. I issued a checkpoint and soon had CEA00016 alongside 15. Five minutes later I still have 15 and 16 (and of course all 271 from Mar 16).

> In Postgres 11, WAL segments worth only one checkpoint are kept
> around.
> --
> Michael
Re: stale WAL files?
> On Mar 29, 2019, at 6:58 AM, Michael Paquier wrote:
>
>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>> This is pg10 so it's pg_wal.  ls -ltr
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B1
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B2
>>
>> ... 217 more on through to ...
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E8
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E9
>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46 00010CEA000E

Today there are 210 Mar 16 WAL files versus the originally reported 271. I cannot at this point confirm the original count, other than to say I used "ls -ltr | grep 'Mar 16' | wc -l" to get both numbers.

Is it interesting that the earliest files (by ls timestamp) are not the lowest numerically? Those two file names end in "B1" and "B2" (written at 16:33), in a range across the directory running from "1A" through "E9". "B0" was written at 16:53 and "B3" at 16:54.

Is there any analysis of the file names I could do which might shed any light on the issue?
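On the file-name question: WAL segment names decode into three hex fields you can sort on directly. A minimal sketch, using hypothetical full-length names (the names quoted above were truncated by mail wrapping, so these are illustrative reconstructions, not the actual files):

```python
# WAL segment file names are 24 hex digits: 8 for the timeline, 8 for the
# "log" number, and 8 for the segment within that log (0x00-0xFF with the
# default 16 MB segment size).  Sorting names as hex numbers gives WAL
# order, which need not match the files' mtimes: PostgreSQL recycles old
# segments by renaming them to *future* names ahead of the insert point,
# so a file can carry an old timestamp yet a "later" name.
def parse_wal_name(name: str):
    assert len(name) == 24, "expected a 24-hex-digit segment name"
    return (int(name[0:8], 16),   # timeline
            int(name[8:16], 16),  # log number
            int(name[16:24], 16)) # segment within the log

# Hypothetical full-length names with segments B3, 1A, B1 on timeline 1,
# log 0xCEA (matching the range discussed above):
names = [f"00000001{0xCEA:08X}{seg:08X}" for seg in (0xB3, 0x1A, 0xB1)]
for n in sorted(names, key=parse_wal_name):
    print(n)
```

Sorting by the parsed tuple puts the "1A" segment first even if its mtime is newest, which is why ls -ltr order and numeric order can disagree.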
Re: stale WAL files?
> On Mar 29, 2019, at 6:58 AM, Michael Paquier wrote:
>
>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>> This is pg10 so it's pg_wal.  ls -ltr
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B1
>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B2
>>
>> ... 217 more on through to ...
>>
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E8
>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E9
>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46 00010CEA000E

I'm now down to 208 Mar 16 WAL files, so they are being processed (at least deleted). I've taken a snapshot of the pg_wal dir so that I can see which files get processed. It's none of the files I've listed previously.
Re: stale WAL files?
> On Mar 30, 2019, at 10:54 AM, Gmail wrote:
>
>> On Mar 29, 2019, at 6:58 AM, Michael Paquier wrote:
>>
>>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>>> This is pg10 so it's pg_wal.  ls -ltr
>>>
>>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B1
>>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33 00010CEA00B2
>>>
>>> ... 217 more on through to ...
>>>
>>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E8
>>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01 00010CEA00E9
>>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46 00010CEA000E
>
> I'm now down to 208 Mar 16 WAL files so they are being processed (at least
> deleted). I've taken a snapshot of the pg_wal dir such that I can see which
> files get processed. It's none of the files I've listed previously.

Two more have been cleaned up: 001C and 001D, generated at 16:38 Mar 16.
Re: stale WAL files?
I'm fighting a nasty cold. It may take a day or two for me to respond to recent questions and suggestions.
Re: stale WAL files?
Sorry folks, I'm still home nursing a nasty chest cold and my only tool today is an iPad. I have failed to get the postgresql.conf into the copy buffer, so that, along with the results of pg_settings, will have to wait for another day.

Today there are "only" 135 Mar 16 WAL files. I haven't sorted out which have been cleaned up, but can do so if that's thought to be helpful. There is still 2.2G in the pg_wal directory, but that disc has ~360G left. (I believe the burst of WAL files was the result of a novice using LIMIT with a to_json function; the target table has >100M rows.)

Given that current WALs come and go regularly, I think the CHECKPOINT is running frequently enough (for normal loads at least).

> On Apr 1, 2019, at 9:20 PM, Rene Romero Benavides wrote:
>
>> On Mon, Apr 1, 2019 at 6:30 PM Rene Romero Benavides wrote:
>>
>>> On Sat, Mar 30, 2019 at 5:03 PM Gmail wrote:
>>> [earlier WAL listing and counts trimmed]
>>>
>>> Two more have been cleaned up: 001C and 001D, generated at 16:38 Mar 16.
>>
>> Please share your complete postgresql.conf file and the results from this
>> query:
>> SELECT * FROM pg_settings;
>> Has someone in the past configured WAL archiving?
>> You've run out of disk space, as this log message you shared states:
>> No space left on device
>> What's the output of df -h?
>>
>> --
>> Genius is 1% inspiration and 99% perspiration.
>> Thomas Alva Edison
>> http://pglearn.blogspot.mx/
>
> BTW, how spread apart are checkpoints happening? Do you have stats on that?
> Maybe they're too spread apart and that's why WAL files cannot be recycled
> rapidly enough?
> --
> Genius is 1% inspiration and 99% perspiration.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/
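For what it's worth, a quick sanity check that the leftover Mar 16 segments account for nearly all of the reported 2.2G, using the 16 MB (16777216-byte) segment size visible in the ls listings:

```python
# 135 leftover 16 MiB WAL segments versus the ~2.2 GB reported in pg_wal.
segment_mib = 16
leftover = 135
gib = leftover * segment_mib / 1024
print(gib)  # 2.109375
```

So the Mar 16 stragglers are essentially the whole 2.2G; the handful of current segments make up the difference.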
Re: How different is AWS-RDS postgres?
Not important who does what with whom re Catalina/Tomcat ;) I will indeed re-examine timeouts and such, as inserting 100s of thousands of records is not instantaneous by any stretch. Tomcat is the new kid on my block: prior to this release I managed a naked port with a Selector, and that had no trouble with this same test data. My biggest concern is the lack of server-side indications.

I also need to confirm the table create/bulk-copy and the update to the target table (segment) are in separate transactions. Doesn't look like it, but they're supposed to be. I want the data written to the db in bulk, then come back round and load to the final table in chunks.

> On May 27, 2021, at 5:20 PM, Sam Gendler wrote:
>
> Unless something has changed in recent years, the core servlet engine of
> tomcat IS catalina. Embedded tomcat is embedded catalina. It looks like the
> I/O error is a result of attempting to send a query on an already dead
> connection. I'd look for something that is limiting how long a connection
> can be open - either an explicit or default value for a timeout in the
> connection pool or on the server side. If you don't get the same behaviour
> when running against a database locally, I'd definitely look at the default
> settings in RDS. It may be automatically closing connections if they are
> idle for even a brief period.
>
>> On Thu, May 27, 2021 at 3:35 PM Rob Sargent wrote:
>>
>>> On 5/27/21 4:25 PM, Sam Gendler wrote:
>>> That sure looks like something is causing your connection to have a
>>> transaction rollback. I haven't worked in Java in far too long, but it
>>> seems like your connection pool is under the impression your connection
>>> was abandoned so it reclaims it and rolls back the transaction, which
>>> would explain why you aren't seeing the table when all is said and done -
>>> all of the work is being undone at the end.
>>>
>>> One possibility, based on the catalina log you provided - if you have
>>> either end of the connection set up to automatically close idle
>>> connections after a period of time, then you might receive a closed
>>> connection from the pool, which will just error out when you attempt to
>>> run a query. In which case, you need to set up your connection pool to
>>> test a connection before it returns it to the requester. Usually
>>> something as simple as "select 2" will be sufficient to determine if the
>>> database connection is open. I can just about guarantee that your
>>> connection pool has a parameter which allows you to specify a query to
>>> execute when a connection is requested.
>>
>> Well I /was/ doing
>>     contextResource.setProperty("validationQuery", "SELECT 1");
>> but I see that I lost that when I switched to using a properties file.
>> Thanks for pointing me there.
>>
>> The loop of 16 insert statements is in a single transaction, single
>> connection, so I'm not sure who's choking first. Is the connection idle
>> after the I/O error, or is the I/O error from a dead connection? (Small
>> disclaimer: there is no catalina involved here, just an embedded tomcat
>> instance.)
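For anyone following along, here is a minimal, language-agnostic sketch of the validate-before-handing-out pattern that validationQuery implements. FakeConn and SimplePool are invented stand-ins for illustration, not the Tomcat JDBC pool API:

```python
# Sketch of a pool that runs a validation query before returning a
# connection, replacing any connection the server has silently closed.
class FakeConn:
    """Hypothetical connection; 'alive' simulates the server-side state."""
    def __init__(self):
        self.alive = True
    def execute(self, sql):
        if not self.alive:
            raise IOError("connection is closed")
        return 1  # stand-in for the result of e.g. "SELECT 1"

class SimplePool:
    def __init__(self, size):
        self._conns = [FakeConn() for _ in range(size)]
    def get(self):
        conn = self._conns.pop()
        try:
            conn.execute("SELECT 1")   # the validation query
        except IOError:
            conn = FakeConn()          # discard the dead one, open fresh
        return conn
    def put(self, conn):
        self._conns.append(conn)

pool = SimplePool(2)
c = pool.get()
c.alive = False            # simulate RDS closing an idle connection
pool.put(c)
c2 = pool.get()
print(c2.alive)  # True: the pool replaced the dead connection
```

The cost is one round trip per checkout, which is why some pools also offer "validate only if idle longer than N seconds" knobs.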
building a server
So I revisited https://momjian.us/main/writings/pgsql/hw_selection.pdf
Near the end (page 24) I spotted:

    Heavy use of server-side functions might generate significant CPU load

Just want to confirm that this is referring to workload on the mainly-single-threaded server, not an advocacy for more cores.

Thanks

PS The wiki page at https://wiki.postgresql.org/wiki/Database_Hardware lists a reference to a PDF by Greg Smith. The URL vectors off into some proprietary e-book enticement. Is that intentional?
Re: Display table entries using partial column entry
> On Dec 10, 2017, at 4:10 PM, Melvin Davidson wrote:
>
>> On Sun, Dec 10, 2017 at 4:50 PM, Sherman Willden wrote:
>> The database table has five columns: 'id aria artist a_artist album_title'.
>> I assume I will use a SELECT * FROM aria_precis WHERE type command.
>>
>> I want to access all arias that have mio ben within the aria title. In the
>> example below I want to display cbiylm06 and cbiylm10.
>>
>> cbiylm06#Caro mio ben#Cecilia Bartoli##Se tu m'ami
>> cbiylm07#Pur dicesti, o bocca bella#Cecilia Bartoli##Se tu m'ami
>> cbiylm08#Intorno all'idol mio#Cecilia Bartoli##Se tu m'ami
>> cbiylm09#Nel cor più non mi sento#Cecilia Bartoli##Se tu m'ami
>> cbiylm10#Il mio ben quando ve#Cecilia Bartoli##Se tu m'ami
>> cbiylm11#O Leggiadri Occhi Belli#Cecilia Bartoli##Se tu m'ami
>> cbiylm12#Il mio bel foco#Cecilia Bartoli##Se tu m'ami
>>
>> Thank you;
>>
>> Sherman
>
> Sherman,
> In the future, please be kind enough to provide PostgreSQL version and O/S
> when posting to this list.
> Also include COLUMN HEADERS with all data, and present the data formatted
> so it is easily readable by humans.
>
> Presuming the # is your column divider, the data you have provided looks
> like this:
>
> id      #aria                      #artist         #a_artist#album_title
> cbiylm06#Caro mio ben              #Cecilia Bartoli#        #Se tu m'ami
> cbiylm07#Pur dicesti, o bocca bella#Cecilia Bartoli#        #Se tu m'ami
> cbiylm08#Intorno all'idol mio      #Cecilia Bartoli#        #Se tu m'ami
> cbiylm09#Nel cor più non mi sento  #Cecilia Bartoli#        #Se tu m'ami
> cbiylm10#Il mio ben quando ve      #Cecilia Bartoli#        #Se tu m'ami
> cbiylm11#O Leggiadri Occhi Belli   #Cecilia Bartoli#        #Se tu m'ami
> cbiylm12#Il mio bel foco           #Cecilia Bartoli#        #Se tu m'ami
>
> Then all you really need is:
> SELECT *
>   FROM aria_precis
>  WHERE aria LIKE '%mio ben%';
>
> Pattern Matching
> https://www.postgresql.org/docs/9.6/static/functions-matching.html

Where I hope you will find the tilde operators ('~', '~*'). Way more fun.

> --
> Melvin Davidson
> I reserve the right to fantasize. Whether or not you
> wish to share my fantasy is entirely up to you.
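As a quick illustration of what that filter selects, here are the sample rows run through an equivalent substring match in Python, with re.search standing in for LIKE '%mio ben%' (or '~*' for the case-insensitive regex flavor):

```python
# Reproduce the '%mio ben%' filter over Sherman's sample rows.
import re

rows = [
    ("cbiylm06", "Caro mio ben"),
    ("cbiylm07", "Pur dicesti, o bocca bella"),
    ("cbiylm08", "Intorno all'idol mio"),
    ("cbiylm09", "Nel cor più non mi sento"),
    ("cbiylm10", "Il mio ben quando ve"),
    ("cbiylm11", "O Leggiadri Occhi Belli"),
    ("cbiylm12", "Il mio bel foco"),
]
matches = [rid for rid, aria in rows
           if re.search(r"mio ben", aria, re.IGNORECASE)]
print(matches)  # ['cbiylm06', 'cbiylm10']
```

Note that "Il mio bel foco" (cbiylm12) does not match: "mio bel" is not "mio ben", which is exactly the two-row result the original poster asked for.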
Re: Delete a table automatic?
Hi,
I too agree with Adrian Klaver: the trigger is the only option.

Thanks,
V Muralidharan
+91 9940302900

On Tue, 1 Nov 2022, 19:46 Peter J. Holzer, wrote:

> On 2022-11-01 07:41:14 -0600, Rob Sargent wrote:
> > On 11/1/22 03:31, jian he wrote:
> >
> >     On Tue, Nov 1, 2022 at 2:33 PM 黄宁 wrote:
> >
> >         I now have two tables named A and B. Table B is calculated based
> >         on the data of table A. I wonder if table B can be automatically
> >         deleted when table A is deleted?
> [...]
> >     you can use DROP TABLE CASCADE.
> >     DROP TABLE manual:
> >     https://www.postgresql.org/docs/current/sql-droptable.html
> >
> > Only if B has a foreign key reference to A
>
> And even then it only drops the constraint, not the table (or the data):
>
> hjp=> create table a (id serial primary key, t text);
> CREATE TABLE
> hjp=> create table b (id serial primary key, a int references a, t text);
> CREATE TABLE
> hjp=> \d a
>                                 Table "public.a"
> ╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
> ║ Column │  Type   │ Collation │ Nullable │            Default            ║
> ╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
> ║ id     │ integer │           │ not null │ nextval('a_id_seq'::regclass) ║
> ║ t      │ text    │           │          │                               ║
> ╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
> Indexes:
>     "a_pkey" PRIMARY KEY, btree (id)
> Referenced by:
>     TABLE "b" CONSTRAINT "b_a_fkey" FOREIGN KEY (a) REFERENCES a(id)
>
> hjp=> \d b
>                                 Table "public.b"
> ╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
> ║ Column │  Type   │ Collation │ Nullable │            Default            ║
> ╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
> ║ id     │ integer │           │ not null │ nextval('b_id_seq'::regclass) ║
> ║ a      │ integer │           │          │                               ║
> ║ t      │ text    │           │          │                               ║
> ╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
> Indexes:
>     "b_pkey" PRIMARY KEY, btree (id)
> Foreign-key constraints:
>     "b_a_fkey" FOREIGN KEY (a) REFERENCES a(id)
>
> [some inserts later]
>
> hjp=> select * from b;
> ╔════╤═══╤══════╗
> ║ id │ a │  t   ║
> ╟────┼───┼──────╢
> ║  1 │ 1 │ foo1 ║
> ║  2 │ 1 │ foo2 ║
> ║  3 │ 2 │ bar1 ║
> ╚════╧═══╧══════╝
> (3 rows)
>
> hjp=> drop table a cascade;
> NOTICE:  drop cascades to constraint b_a_fkey on table b
> DROP TABLE
>
> hjp=> \d b
>                                 Table "public.b"
> ╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════╗
> ║ Column │  Type   │ Collation │ Nullable │            Default            ║
> ╟────────┼─────────┼───────────┼──────────┼───────────────────────────────╢
> ║ id     │ integer │           │ not null │ nextval('b_id_seq'::regclass) ║
> ║ a      │ integer │           │          │                               ║
> ║ t      │ text    │           │          │                               ║
> ╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════╝
> Indexes:
>     "b_pkey" PRIMARY KEY, btree (id)
>
> As you can see, the table is still there, but the foreign key constraint
> is gone.
>
> hjp=> select * from b;
> ╔════╤═══╤══════╗
> ║ id │ a │  t   ║
> ╟────┼───┼──────╢
> ║  1 │ 1 │ foo1 ║
> ║  2 │ 1 │ foo2 ║
> ║  3 │ 2 │ bar1 ║
> ╚════╧═══╧══════╝
> (3 rows)
>
> And the data in the table is also unchanged.
>
>         hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | h...@hjp.at         |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"