On Mon, Jun 3, 2019 at 9:02 AM Hervé Ballans <[email protected]>
wrote:

> Hi all,
>
> For your information: I updated my Luminous cluster to the latest version,
> 12.2.12, two weeks ago and, since then, I no longer encounter any problems
> with inconsistent PGs :)
>

You probably were affected by https://tracker.ceph.com/issues/22464

tl;dr: new kernel + low RAM = read errors, for some reason. The fix was to
retry the reads ;)
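For anyone hitting similar scrub errors, the usual triage sequence looks
something like the following (the pg id 2.1ab is just a placeholder, not a
value from this thread):

```shell
# List the inconsistent PGs and the scrub errors behind them
ceph health detail

# Inspect which objects/shards in a given PG are damaged and why
rados list-inconsistent-obj 2.1ab --format=json-pretty

# If the errors look transient (e.g. the read errors from tracker #22464),
# ask the OSDs to repair the PG from the good copies
ceph pg repair 2.1ab

# Afterwards, a deep-scrub should come back clean
ceph pg deep-scrub 2.1ab
```

These commands need a live cluster and admin keyring, of course; the point is
just that `list-inconsistent-obj` tells you whether the inconsistency is a
real disk problem or a spurious read error before you reach for `pg repair`.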


>
> Regards,
> rv
>
> On 03/05/2019 at 11:54, Hervé Ballans wrote:
>
> On 24/04/2019 at 10:06, Janne Johansson wrote:
>
> On Wed, 24 Apr 2019 at 08:46, Zhenshi Zhou <[email protected]> wrote:
>
>> Hi,
>>
>> I've been running a cluster for some time, and recently it has been
>> going into an unhealthy state quite often.
>>
>> According to 'ceph health detail', one or two PGs are inconsistent.
>> What's more, the PGs that end up in the wrong state are on different
>> disks each day, so I don't think it's a disk problem.
>>
>> The cluster is using version 12.2.5. Any idea about this strange issue?
>>
>>
> There were lots of fixes in the releases around that version;
> do read https://ceph.com/releases/12-2-7-luminous-released/
> and the later release notes in the 12.2.x series.
>
> Hi,
>
> I encounter exactly the same problem on my Ceph Luminous cluster even
> though I am on version 12.2.10! (This was already the case with previous
> Luminous releases.)
>
> And unfortunately, I don't see any mention of that issue in the changelog
> of 12.2.12 :(
>
> Has anyone ever looked into this issue?
>
> Regards,
> rv
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
