I haven't seen behavior like that.  I have seen my OSDs use a lot of RAM
while they're doing a recovery, but it goes back down when they're done.
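If the memory climbs during recovery or peering, throttling recovery concurrency usually keeps it in check. A minimal ceph.conf sketch (these are standard Ceph OSD options; the values are just conservative starting points, not a recommendation for your workload):

```ini
[osd]
        ; allow only one backfill operation per OSD at a time
        osd max backfills = 1
        ; allow only one active recovery operation per OSD at a time
        osd recovery max active = 1
```

You can raise these back up once the cluster is healthy if recovery is too slow.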

Your OSD is doing something; it's using 126% CPU. What do `ceph osd tree`
and `ceph health detail` say?


When you say you're installing Ceph on 10 servers, are you running a monitor
on all 10 servers?




On Wed, Jun 18, 2014 at 4:18 AM, wsnote <[email protected]> wrote:

> If I install ceph on 10 servers with one disk in each server, the problem
> remains.
> This is the memory usage of ceph-osd.
> ceph-osd VIRT: 10.2G, RES: 4.2G
> The memory usage of ceph-osd is too high!
>
>
> At 2014-06-18 16:51:02,wsnote <[email protected]> wrote:
>
> Hi, Lewis!
> I've run into a problem and don't know how to solve it, so I'm asking you
> for help.
> I can install Ceph successfully on a cluster of 3 or 4 servers, but it
> fails with 10 servers.
> After I install and start it, one server's memory usage rises to 100% and
> that server crashes; I have to restart it.
> All the configs are the same. I don't know what the problem is.
> Can you give me some suggestions?
> Thanks!
>
> ceph.conf:
> [global]
>         auth supported = none
>
>         ;auth_service_required = cephx
>         ;auth_client_required = cephx
>         ;auth_cluster_required = cephx
>         filestore_xattr_use_omap = true
>
>         max open files = 131072
>         log file = /var/log/ceph/$name.log
>         pid file = /var/run/ceph/$name.pid
>         keyring = /etc/ceph/keyring.admin
>
>         ;mon_clock_drift_allowed = 1 ;clock skew detected
>
> [mon]
>         mon data = /data/mon$id
>         keyring = /etc/ceph/keyring.$name
> [mds]
>         mds data = /data/mds$id
>         keyring = /etc/ceph/keyring.$name
> [osd]
>         osd data = /data/osd$id
>         osd journal = /data/osd$id/journal
>         osd journal size = 1024
>         keyring = /etc/ceph/keyring.$name
>         osd mkfs type = xfs
>         osd mount options xfs = rw,noatime
>         osd mkfs options xfs = -f
>         filestore fiemap = false
>
> On every server there is one mds, one mon, and 11 osds with 4TB of space
> each.
> The mon address is a public IP, and each osd has both a public IP and a
> cluster IP.
>
> wsnote
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com