On 11/7/22 08:02, Tom Lane wrote:
[snip]
> call. It'd still be recommendable to pg_dumpall and restore into
> a freshly-initdb'd cluster, because otherwise you can't be real
> sure that you identified and cleared all the data corruption.

Why *just* pg_dumpall instead of "pg_dumpall --globals-only" and
per-database pg_dump?
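For context, a minimal sketch of the two routes being contrasted here, with
made-up data directory, port, and database name (the thread itself
prescribes none of these specifics):

  # full-cluster route: one dump, restored into a freshly-initdb'd cluster
  pg_dumpall -p 5432 > full.sql
  initdb -D /var/lib/pgsql/new_data
  pg_ctl -D /var/lib/pgsql/new_data -o '-p 5433' start
  psql -p 5433 -d postgres -f full.sql

  # the variant Ron asks about: globals only, plus per-database dumps
  pg_dumpall -p 5432 --globals-only > globals.sql
  pg_dump -p 5432 -Fc -f mydb.dump mydb      # repeat for each database
  psql -p 5433 -d postgres -f globals.sql
  createdb -p 5433 mydb
  pg_restore -p 5433 -d mydb mydb.dump

Either way, every row is read out of the old cluster and written fresh,
which is the point of Tom's recommendation.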
On 11/7/22 06:19, Laurenz Albe wrote:
> Don't continue to work with that cluster even if everything seems OK now.
> "pg_dumpall" and restore to a new cluster on good hardware.

Why would that be necessary if the original machine works well now?

--
Mladen Gogala
Database Consultant
Tel: (347) 321-12
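(The answer Tom gives, quoted further up: with hardware that has already
corrupted data once, you can't be sure you found *all* the damage. A dump
reads every row of every table, so it doubles as a corruption check that
ordinary workloads never perform. A common smoke test along those lines,
output discarded:

  pg_dumpall > /dev/null && echo "all table data readable"

This only exercises heap data, though; index corruption needs something
like the amcheck sweep shown below.)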
Stefan Froehlich writes:
> I am using v13, but well:
> | # create extension amcheck;
> | # select oid, relname from pg_class where relname ='faultytablename_pkey';
> | [returns oid 537203]
> | # select bt_index_check(537203, true);
> | server closed the connection unexpectedly

Oh ... up through
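Stefan's one-index-at-a-time call crashed the backend, which amcheck can do
when an index is badly enough damaged. For the record, a sketch of the
broader sweep, adapted from the pattern in the amcheck documentation,
checking every valid btree index in the current database (PostgreSQL 11+
signature; heapallindexed => true also verifies each heap tuple is present
in the index):

  CREATE EXTENSION IF NOT EXISTS amcheck;

  SELECT c.relname,
         bt_index_check(index => c.oid, heapallindexed => true)
  FROM pg_index i
  JOIN pg_class c ON c.oid = i.indexrelid
  JOIN pg_am am ON am.oid = c.relam
  WHERE am.amname = 'btree'
    AND c.relpersistence <> 't'   -- skip other sessions' temp indexes
    AND i.indisvalid;

An index that fails the check can simply be dropped and rebuilt with
REINDEX; it's corrupt *table* data that forces the dump-and-restore
discussion above.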
On Sun, Nov 06, 2022 at 09:13:08AM -0500, Tom Lane wrote:
> > | 2022-11-06 11:52:36.367 CET [2098-35] LOG: server process (PID 2964738)
> > | was terminated by signal 11: Segmentation fault
> contrib/amcheck might help to identify the faulty data (at this
> point there's reason to fear multiple c
Stefan Froehlich writes:
> I followed the suggestion to trace down the faulty record, found and
> fixed it. Now I can access that record again, but if I try to dump
> the table I get:
> | 2022-11-06 11:52:36.367 CET [2098-35] LOG: server process (PID 2964738)
> | was terminated by signal 11: Segmentation fault
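Stefan doesn't show how he traced the faulty record down, but a common
low-tech approach, sketched here with a made-up table name and made-up
numbers, is to bisect with LIMIT until the crash reproduces and then remove
the damaged tuple by its physical address:

  -- read growing prefixes of the table; the damaged tuple sits just past
  -- the largest LIMIT that still succeeds
  SELECT count(*) FROM (SELECT * FROM faultytable LIMIT 500000) AS t;

  -- locate the last readable row's physical address (ctid) ...
  SELECT ctid FROM faultytable OFFSET 499999 LIMIT 1;

  -- ... then delete the broken tuple by ctid (value made up)
  DELETE FROM faultytable WHERE ctid = '(8191,42)';

As the rest of the thread makes clear, deleting the tuple only unblocks the
dump; it does nothing to recover the lost row or to certify the rest of the
cluster.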