I made an EC 4+2 cluster with `crush-failure-domain=host`.

Later, after adding more machines, I changed the failure domain from `host` to `datacenter`:

    ceph osd erasure-code-profile set my_ec_profile_datacenter k=4 m=2 \
        crush-failure-domain=datacenter crush-device-class=hdd
    ceph osd crush rule create-erasure rule_my_data_ec_datacenter \
        my_ec_profile_datacenter
    ceph osd pool set my_data_ec crush_rule rule_my_data_ec_datacenter
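
For what it's worth, one way to double-check what the pool is actually doing after the switch (a sketch, assuming a standard Ceph CLI and the rule name created above; requires a live cluster):

```shell
# Show which CRUSH rule the pool currently uses
ceph osd pool get my_data_ec crush_rule

# Dump the rule and verify its steps reference "datacenter"
# as the chooseleaf bucket type
ceph osd crush rule dump rule_my_data_ec_datacenter

# Confirm actual placement: list PGs with their up/acting OSD sets
ceph pg ls-by-pool my_data_ec
```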

This seems to have worked, and `ceph osd pool get my_data_ec crush_rule` 
outputs:

    crush_rule: rule_my_data_ec_datacenter

But `ceph osd pool ls detail` still shows

    pool 3 'my_data_ec' erasure profile my_ec_profile ...

with `my_ec_profile` instead of `my_ec_profile_datacenter`.

Is this a problem?
Which one wins, the profile or the CRUSH rule?

If it's not a problem, it is at least confusing; can I fix it somehow?

Thanks!
Niklas
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
