On Tue, Sep 15, 2020 at 07:58:39PM +0200, Magnus Hagander wrote:
> Try reading them "row by row" until it breaks. That is, SELECT * FROM ...
> LIMIT 1, then LIMIT 2 etc. For more efficiency use a binary search starting
> at what seems like a reasonable place looking at the size of the table vs
> the first failed block to make it faster, but the principle is the same.
Vasu Madhineni writes:
> Hi Magnus,
>
> Thanks for your update.
> If I run the command below to identify how many tables are corrupted in
> the database, will it have any impact on other tables in the production
> environment?
>
> "pg_dump -f /dev/null database"
Consider using pg_dump or any other means t
Hi Magnus,
Thanks for your update.
If I run the command below to identify how many tables are corrupted in the
database, will it have any impact on other tables in the production
environment?
"pg_dump -f /dev/null database"
Thanks in advance.
Regards,
Vasu Madhineni
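[For what it's worth, pg_dump only ever reads the data, and the same read-only
idea works one table at a time to narrow down which tables are broken. A
minimal sketch in Python; the database and table names are hypothetical, and
the `run` callable is injectable so the loop can be exercised without a live
server:]

```python
import subprocess

def dump_table_cmd(dbname, table):
    """Build a read-only pg_dump command that discards its output."""
    return ["pg_dump", "-d", dbname, "-t", table, "-f", "/dev/null"]

def find_bad_tables(dbname, tables, run=subprocess.run):
    """Return the tables whose dump fails; `run` is injectable for testing."""
    bad = []
    for table in tables:
        result = run(dump_table_cmd(dbname, table))
        if result.returncode != 0:
            bad.append(table)
    return bad
```

[Since this only reads, it should not change anything in the other tables,
though it does add read I/O load on a production system.]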
On Fri, Sep 18, 2020 at 3:42 PM Magnus Hagander wrote:
That depends on what the problem is and how they fix it. Most likely yes --
especially since if you haven't enabled data checksums you won't *know* if
things are OK or not. So I'd definitely recommend it even if things *look*
OK.
//Magnus
On Wed, Sep 16, 2020 at 5:06 AM Vasu Madhineni wrote:
I can see block read I/O errors in /var/log/syslog. If those errors are fixed
by the OS team, will recovery still be required?
Also, can I use LIMIT and OFFSET to locate the corrupted rows?
Thanks in advance
Regards,
Vasu Madhineni
On Wed, Sep 16, 2020, 01:58 Magnus Hagander wrote:
> Try reading them "row by row" until it breaks. That is, SELECT * FROM ...
> LIMIT 1, then LIMIT 2 etc. For more efficiency use a binary search starting
> at what seems like a reasonable place looking at the size of the table vs
> the first failed block to make it faster, but the principle is the same.
Try reading them "row by row" until it breaks. That is, SELECT * FROM ...
LIMIT 1, then LIMIT 2 etc. For more efficiency use a binary search starting
at what seems like a reasonable place looking at the size of the table vs
the first failed block to make it faster, but the principle is the same.
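[A sketch of that binary search, assuming a probe function that reports
whether "SELECT * FROM t LIMIT n" succeeds; `read_ok` here is a hypothetical
stand-in for actually running the query, e.g. via psql:]

```python
def first_bad_row(total_rows, read_ok):
    """Smallest n for which 'SELECT * FROM t LIMIT n' fails, or None.

    read_ok(n) should run the LIMIT-n query and return True on success;
    corruption is assumed to make every larger LIMIT fail as well.
    """
    if read_ok(total_rows):
        return None  # the whole table reads fine
    lo, hi = 1, total_rows  # the first failure is somewhere in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if read_ok(mid):
            lo = mid + 1  # rows 1..mid are readable
        else:
            hi = mid      # the failure is at mid or earlier
    return lo
```

[Once the first bad row index is known, LIMIT/OFFSET around it shows the last
readable rows; corruption is per-block rather than per-row, so everything in
the same block is likely to be affected too.]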
Is it possible to identify which rows are corrupted in particular tables?
On Tue, Sep 15, 2020 at 5:36 PM Magnus Hagander wrote:
>
>
> On Tue, Sep 15, 2020 at 11:15 AM Vasu Madhineni
> wrote:
>
>> Hi All,
>>
>> In one of my Postgres databases multiple tables got corrupted. I
>> followed the steps below, but still get the same error:
On Tue, Sep 15, 2020 at 11:15 AM Vasu Madhineni wrote:
> Hi All,
>
> In one of my Postgres databases multiple tables got corrupted. I followed
> the steps below, but still get the same error:
>
> 1. SET zero_damaged_pages = on
> 2. VACUUM ANALYZE, VACUUM FULL
>
That is a very