Hello Frank,
Thank you. I ran the following command: ceph pg 32.15c list_unfound
I located the object, but I don't know how to solve this problem.
{
"num_missing": 1,
"num_unfound": 1,
"objects": [
{
"oid": {
"oid": "rbd_data.aedf52e8a44410.000000000000021f",
"key": "",
"snapid": -2,
"hash": 358991196,
"max": 0,
"pool": 32,
"namespace": ""
},
"need": "49128'125646582",
"have": "0'0",
"flags": "none",
"clean_regions": "clean_offsets: [], clean_omap: 0, new_object: 1",
"locations": []
}
],
"more": false
}
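(For reference, a hedged sketch of how to map this unfound object back to the RBD image that owns it. The object name prefix "rbd_data.aedf52e8a44410" corresponds to an image's block_name_prefix; the pool name "rbd-pool" below is a placeholder for whatever pool id 32 is actually called in this cluster.)

```shell
# Loop over all images in the pool and match the block_name_prefix
# against the unfound object's prefix. Requires the rbd CLI and jq.
for img in $(rbd ls rbd-pool); do
  prefix=$(rbd info rbd-pool/"$img" --format json | jq -r .block_name_prefix)
  if [ "$prefix" = "rbd_data.aedf52e8a44410" ]; then
    echo "unfound object belongs to image: $img"
  fi
done
```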
Thank you.
________________________________
From: Frank Schilder <[email protected]>
Sent: Monday, June 26, 2023 11:43
To: Jorge JP <[email protected]>; Stefan Kooman <[email protected]>;
[email protected] <[email protected]>
Subject: Re: [ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent
I don't think pg repair will work. It looks like a 2(1) replicated pool where
both OSDs seem to have accepted writes while the other was down and now the PG
can't decide what is the true latest version.
Using size 2 / min_size 1 comes with manual labor. As far as I can tell, you
will need to figure out which files/objects are affected and either update the
missing copy or delete the object manually.
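(A hedged sketch of the "delete the object manually" route using the standard ceph CLI. Since the list_unfound output above shows "have": "0'0", there is no previous version to revert to, so "delete" rather than "revert" would apply; this discards the unfound object's data, so identify the affected image first.)

```shell
# Give up on the unfound object: "revert" rolls back to a prior version
# (not possible here, since "have" is 0'0), "delete" forgets it entirely.
ceph pg 32.15c mark_unfound_lost delete

# Re-check cluster state afterwards.
ceph health detail
ceph pg 32.15c query | jq .recovery_state
```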
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Jorge JP <[email protected]>
Sent: Monday, June 26, 2023 11:34 AM
To: Stefan Kooman; [email protected]
Subject: [ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent
Hello Stefan,
I ran this command yesterday but the status has not changed. Other PGs with
status "inconsistent" were repaired after a day, but in this case it did not
work.
instructing pg 32.15c on osd.49 to repair
Normally the PG status changes to "repair", but this time it did not.
________________________________
From: Stefan Kooman <[email protected]>
Sent: Monday, June 26, 2023 11:27
To: Jorge JP <[email protected]>; [email protected] <[email protected]>
Subject: Re: [ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent
On 6/26/23 08:38, Jorge JP wrote:
> Hello,
>
> After deep-scrub my cluster shown this error:
>
> HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data
> damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy:
> 2/77158878 objects degraded (0.000%), 1 pg degraded
> [WRN] OBJECT_UNFOUND: 1/38578006 objects unfound (0.000%)
> pg 32.15c has 1 unfound objects
> [ERR] OSD_SCRUB_ERRORS: 1 scrub errors
> [ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound, 1 pg
> inconsistent
> pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting
> [49,47], 1 unfound
> [WRN] PG_DEGRADED: Degraded data redundancy: 2/77158878 objects degraded
> (0.000%), 1 pg degraded
> pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting
> [49,47], 1 unfound
>
>
> I have been searching the internet for a solution, but I'm confused..
>
> Can anyone help me?
Does "ceph pg repair 32.15c" work for you?
Gr. Stefan
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]