I set up a single-node, dual-osd cluster following the Quick Start on
ceph.com with Firefly packages, adding "osd pool default size = 2".
All of the pgs came up in active+remapped or active+degraded status. I
read up on tunables and set them to optimal, to no effect, so I added
a third osd instead. About 39 pgs moved to active status, but the rest
stayed in active+remapped or active+degraded. When I raised the
replication level to 3 with "ceph osd pool set ... size 3", all the
pgs went back to degraded or remapped. Just for kicks, I tried to set
the replication level to 1, and I still only got 39 pgs active. Is
there something obvious I'm doing wrong?
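For reference, the config setting and commands described above were roughly the following (the pool name is elided in the original, so <pool> below is a placeholder; the tunables and status commands are my best recollection of the standard ones):

```shell
# ceph.conf (under [global]) -- default replica count for new pools,
# as added per the Quick Start:
#   osd pool default size = 2

# Set CRUSH tunables to the optimal profile:
ceph osd crush tunables optimal

# Change the replication level on a pool
# (<pool> is a placeholder for the elided pool name):
ceph osd pool set <pool> size 3

# Check cluster and placement group status:
ceph -s
ceph pg dump_stuck unclean
```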

m.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com