On Thu, Aug 22, 2024 at 9:59 AM o1bigtenor <o1bigte...@gmail.com> wrote:
> On Thu, Aug 22, 2024 at 8:03 AM Ron Johnson <ronljohnso...@gmail.com> wrote:
>
>> On Thu, Aug 22, 2024 at 8:49 AM o1bigtenor <o1bigte...@gmail.com> wrote:
>>
>>> On Thu, Aug 22, 2024 at 6:24 AM Ron Johnson <ronljohnso...@gmail.com> wrote:
>>>
>>>> That's great on small databases. Not so practical when they're big.
>>>>
>>> So - - - - what is the recommended procedure for 'large' databases?
>>>
>>> (Might be useful to have a definition for what a large database is as well.)
>>>
>>
>> "Large" is when it takes too long to run *TWO* text mode pg_dump
>> commands *in addition to* the pg_dump and pg_restore.
>>
>
> Hmmmmmmmmm - - - - I'd say that's about as neat a non-answer as I've ever seen.
>

Eh?

If you've got hundreds of hours of down time to pipe a text-mode pg_dump of a
TB-sized database through md5sum, twice, then that database isn't too big. I
don't have that much down time; thus, it's "too big".

> Can you try again?
>
> (You forgot the first question - - - maybe you could try that one too - - -
> what is the recommended procedure for 'large' databases?)
>

I already did, in my message three hours ago.

--
Death to America, and butter sauce. Iraq lobster!
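
For context, a minimal sketch of the comparison being described above - - -
two plain-format dumps piped through md5sum and checked against each other.
The database names are placeholders, and it assumes the restored copy lives
in a separate database with no concurrent writes to either one:

    # Checksum a plain-text dump of each database instead of storing it.
    # "mydb" and "mydb_restored" are hypothetical names.
    pg_dump --format=plain mydb          | md5sum > original.md5
    pg_dump --format=plain mydb_restored | md5sum > restored.md5
    # Matching checksums mean the two dumps were byte-for-byte identical.
    diff original.md5 restored.md5 && echo "dumps match"

On a TB-sized database each of those pg_dump runs can take many hours, which
is the downtime objection raised above.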