On 3/12/21 12:31 PM, Eneko Lacunza wrote:
> Hi Adrian,
Hi!
> On 3/12/21 11:26 AM, Adrian Sevcenco wrote:
>> Hi! yesterday i bootstrapped (with cephadm) my first ceph installation and things looked somehow ok .. but today the osds are not yet ready and i have in the dashboard these warnings:
>> MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
>> PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
>> PG_DEGRADED: Degraded data redundancy: 2/14 objects degraded (14.286%), 66 pgs undersized
>> TOO_FEW_OSDS: OSD count 2 < osd_pool_default_size 3
> This is the issue. You only have 2 OSDs, but the pool default size is 3.
it should not, as i changed the values:

ceph osd pool ls detail
pool 1 'NVME' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 1 pgp_num_target 128 autoscale_mode on last_change 69 lfor 0/0/54 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min 64 application cephfs,rbd
pool 2 'device_health_metrics' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 1 pgp_num_target 2 autoscale_mode on last_change 76 lfor 0/0/60 flags hashpspool stripe_width 0 pg_num_min 2 application mgr_devicehealth
pool 3 'cephfs.sev-ceph.meta' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 77 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 4 'cephfs.sev-ceph.data' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 79 flags hashpspool stripe_width 0 application cephfs
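i guess the warning comes from the osd_pool_default_size config value rather than from the per-pool sizes (the message compares the OSD count against that default), so something like the following should clear it. just a sketch, not tested here:

    # lower the cluster-wide default so a 2-OSD cluster no longer trips TOO_FEW_OSDS
    ceph config set global osd_pool_default_size 2
    # check what the monitors now report for the option
    ceph config get mon osd_pool_default_size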
>> and in the logs:
>> 3/12/21 12:18:19 PM [INF] OSD <1> is not empty yet. Waiting a bit more
>> 3/12/21 12:18:19 PM [INF] OSD <0> is not empty yet. Waiting a bit more
>> 3/12/21 12:18:19 PM [INF] Can't even stop one OSD. Cluster is probably busy. Retrying later..
>> 3/12/21 12:18:19 PM [ERR] cmd: osd ok-to-stop failed with: 31 PGs are already too degraded, would become too degraded or might become unavailable. (errno:-16)
>> this is a single node, whole package ceph install with 2 local nvme drives as osds (to be used 2x replicated like a raid1 array)
>> So, can anyone tell me what is going on?
> I don't think you should use Ceph for this config. The bare minimum you should use is 3 nodes, because the default failure domain is host.
ooooh ... how can i change the failure domain from host to device (osd)?
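if it is only about the CRUSH failure domain, something like this looks like it should work. again just a sketch, untested, and the rule name replicated_osd is only an example:

    # create a replicated CRUSH rule that places copies across OSDs instead of hosts
    ceph osd crush rule create-replicated replicated_osd default osd
    # point an existing pool at the new rule (repeat for each pool)
    ceph osd pool set NVME crush_rule replicated_osd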
> Maybe you can explain what your goal is, so people can recommend setups.
so, this is my first encounter with ceph and i just want a single-node installation, so i can get familiar with both server administration and with client rbd and mds usage.
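for a single-node lab the generic docs also mention setting the chooseleaf type to 0 (osd) so the initial CRUSH rule does not require multiple hosts; a sketch of feeding that in at bootstrap time, with an arbitrary file name and untested here:

    # minimal.conf (hypothetical), passed as: cephadm bootstrap --config minimal.conf --mon-ip <ip>
    [global]
    osd_crush_chooseleaf_type = 0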
Thank you!
Adrian
