On Sat, 7 Jun 2014, Anil Dhingra wrote:
> Hi guys,
>
> Finally writing... after losing my patience configuring my cluster multiple
> times and still not achieving active+clean. It looks like it's almost
> impossible to configure this on CentOS 6.5.
>
> I have to prepare a PoC of ceph+cinder, but with this config it is difficult
> to convince anyone. Also, there are no udev rules for CentOS 6.5 (I copied
> them from git), and ceph-deploy doesn't create the required directories on
> the ceph nodes, like /var/lib/ceph/osd and /var/lib/ceph/bootstrap-osd.
> Anyone configuring this for the first time nearly goes mad trying to figure
> out what went wrong.

Were these the packages from ceph.com?
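Those directories are normally created by the ceph RPMs themselves; if they
are missing, creating them by hand on each node before the osd step works as
a stopgap, e.g. (node name taken from your session below):

    ssh ceph-node2 'mkdir -p /var/lib/ceph/osd /var/lib/ceph/bootstrap-osd'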
>
> Q1 - Why does it start creating PGs right after the cluster is created, even
> though no OSDs have been added yet? With no OSDs, where is it trying to
> write? See the output below, from before any OSD was added:
>
> [root@ceph-node1 my-cluster]# ceph-deploy mon create-initial
> [root@ceph-node1 my-cluster]# ceph -s
> cluster fbf07780-f7bf-4d92-a144-a931ef5cd4a9
> health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
> monmap e1: 1 mons at {ceph-node1=192.168.10.41:6789/0}, election epoch 2, quorum 0 ceph-node1
> osdmap e1: 0 osds: 0 up, 0 in
> pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
> 0 kB used, 0 kB / 0 kB avail
> 192 creating
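The 192 PGs you see with zero OSDs are expected: the initial monitor setup
creates the three default pools (data, metadata, rbd), each with pg_num 64
in your case, and 3 x 64 = 192. Nothing is written anywhere yet; the PGs
just sit in "creating" until there are OSDs to host them. For example, to
see where the number comes from:

    ceph osd lspools                 # 0 data,1 metadata,2 rbd,
    ceph osd pool get data pg_num    # pg_num: 64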
>
>
> After adding 1st OSD
>
> [root@ceph-node1 my-cluster]# ceph-deploy osd --zap-disk create ceph-node2:sdb
> [root@ceph-node1 my-cluster]# ceph -w
> cluster fbf07780-f7bf-4d92-a144-a931ef5cd4a9
> health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
> monmap e1: 1 mons at {ceph-node1=192.168.10.41:6789/0}, election epoch 2, quorum 0 ceph-node1
> osdmap e5: 1 osds: 1 up, 1 in
> pgmap v7: 192 pgs, 3 pools, 0 bytes data, 0 objects
> 35116 kB used, 5074 MB / 5108 MB avail
> 192 active+degraded
>
> After 2nd OSD
>
> [root@ceph-node1 my-cluster]# ceph-deploy osd --zap-disk create ceph-node3:sdb
> [root@ceph-node1 my-cluster]# ceph -w
> cluster fbf07780-f7bf-4d92-a144-a931ef5cd4a9
> health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
> monmap e1: 1 mons at {ceph-node1=192.168.10.41:6789/0}, election epoch 2, quorum 0 ceph-node1
> osdmap e8: 2 osds: 2 up, 2 in
> pgmap v13: 192 pgs, 3 pools, 0 bytes data, 0 objects
> 68828 kB used, 10150 MB / 10217 MB avail
> 192 active+degraded
This is perfectly normal for firefly because the default replication is
now 3x and you only have 2 OSDs in your cluster. If you add a third you
should see active+clean.
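If a third disk isn't available for the PoC, the other option is to lower
the replica count on the pools to match the OSDs you have, e.g.:

    ceph osd pool get rbd size        # check the effective replica count first
    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2

(Your osd dump below already shows size 2 on all three pools, presumably
from the osd_pool_default_size = 2 in your ceph.conf, so do check the
effective value before changing anything.)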
If you were following an install guide, please let us know which one so we
can get it corrected.
Thanks!
sage
> 2014-06-06 23:29:47.358646 mon.0 [INF] pgmap v13: 192 pgs: 192 active+degraded; 0 bytes data, 68828 kB used, 10150 MB / 10217 MB avail
> 2014-06-06 23:31:46.711047 mon.0 [INF] pgmap v14: 192 pgs: 192 active+degraded; 0 bytes data, 68796 kB used, 10150 MB / 10217 MB avail
>
> [root@ceph-node1 my-cluster]# cat /etc/ceph/ceph.conf
> [global]
> osd_pool_default_pgp_num = 100
> auth_service_required = cephx
> osd_pool_default_size = 2
> filestore_xattr_use_omap = true
> auth_client_required = cephx
> osd_pool_default_pg_num = 100
> auth_cluster_required = cephx
> mon_host = 192.168.10.41
> public_network = 192.168.10.0/24
> mon_clock_drift_allowed = .3
> mon_initial_members = ceph-node1
> cluster_network = 192.168.10.0/24
> fsid = fbf07780-f7bf-4d92-a144-a931ef5cd4a9
>
> [root@ceph-node1 my-cluster]# ceph osd dump
> epoch 8
> fsid fbf07780-f7bf-4d92-a144-a931ef5cd4a9
> created 2014-06-06 23:21:47.665510
> modified 2014-06-06 23:29:41.411379
> flags
> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool
> crash_replay_interval 45 stripe_width 0
> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool
> stripe_width 0
> pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool
> stripe_width 0
> max_osd 2
> osd.0 up in weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval
> [0,0) 192.168.10.42:6800/5848 192.168.10.42:6801/5848
> 192.168.10.42:6802/5848 192.168.10.42:6803/5848 exists,up
> 0f55a826-fa5b-44b2-b2f8-7b83d15526bf
> osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval
> [0,0) 192.168.10.43:6800/7758 192.168.10.43:6801/7758
> 192.168.10.43:6802/7758 192.168.10.43:6803/7758 exists,up
> 5c701240-51a2-407a-b32a-9830935c1567
>
> [root@ceph-node1 my-cluster]# ceph osd tree
> # id weight type name up/down reweight
> -1 0 root default
> -2 0 host ceph-node2
> 0 0 osd.0 up 1
> -3 0 host ceph-node3
> 1 0 osd.1 up 1
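Also note that every CRUSH weight in the tree above is 0. The initial
weight is taken from the disk size in TB, so tiny test disks (yours are
about 5 GB) round down to 0, and CRUSH can fail to map all replicas onto
zero-weight items. If the cluster still doesn't go active+clean once a
third OSD is in, try giving the OSDs a nonzero weight, e.g.:

    ceph osd crush reweight osd.0 1.0
    ceph osd crush reweight osd.1 1.0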
>
> Thanks
> Anil
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com