Swap can easily reduce your cluster's performance, can't it? OSD processes
whose data gets swapped out will cause additional, unwanted disk I/O.

I've got 10 OSDs per host, and Ceph's memory consumption is typically 70GiB
per host... Each host has about 40GiB of available memory, which is sufficient
(for my setup), except for one time when I ran out of memory while deleting
old snapshots. But 8GiB of swap wouldn't have helped there...
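Incidentally, the per-process swap survey quoted further down (the `for file
in /proc/*/status` loop) can be done in a single awk pass. A minimal sketch,
assuming a Linux /proc layout; the sort key and `head -20` count are my own
choices, not from the thread:

```shell
# Report swapped-out memory per process, largest first, by reading the
# Name: and VmSwap: fields of every /proc/<pid>/status. Processes whose
# status file has no VmSwap line are simply skipped.
awk '$1 == "Name:"   { name = $2 }
     $1 == "VmSwap:" { print name, $2, "kB" }' \
    /proc/[0-9]*/status 2>/dev/null | sort -k2 -nr | head -20
```

Same output format as the loop below (e.g. "ceph-osd 1553520 kB"), but one
awk process instead of one per PID.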



> -----Original message-----
> From: Dmitrijs Demidovs <[email protected]>
> Sent: Friday, 23 May 2025 10:16
> To: [email protected]
> Subject: [ceph-users] Re: SWAP usage 100% on OSD hosts after migration to
> Rocky Linux 9 (Ceph 16.2.15)
> 
> Hi Anthony.
> 
> Yes, we have swap enabled. The old Rocky 8 and new Rocky 9 OSD hosts are
> both configured with 8G of swap.
> 
> I will try to disable swap, but I guess we will then get a lot of Out Of
> Memory messages on the OSD hosts.
> 
> 
> 
> = old:
> [root@ceph-osd11 ~]# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi        30Gi       1.2Gi       2.1Gi        30Gi        29Gi
> Swap:          8.0Gi       2.8Gi       5.2Gi
> 
> = new:
> [root@ceph-osd17 ~]# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi        26Gi       1.0Gi       1.0Gi        36Gi        36Gi
> Swap:          8.0Gi       8.0Gi       7.0Mi
> 
> OSD containers consume a reasonable amount of RAM (~2.6GiB to ~3.6GiB):
> 
> 
> [root@ceph-osd17 ~]# docker stats --no-stream
> CONTAINER ID   NAME                                                                 CPU %     MEM USAGE / LIMIT     MEM %     NET I/O   BLOCK I/O         PIDS
> 5cc58e4a77b2   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-52                     0.28%     3.576GiB / 62.28GiB   5.74%     0B / 0B   3.9TB / 975GB     62
> 3a60fecf648d   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-50                     0.28%     2.912GiB / 62.28GiB   4.68%     0B / 0B   100TB / 45.7TB    62
> 9c20407e79eb   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-49                     0.28%     2.905GiB / 62.28GiB   4.66%     0B / 0B   93TB / 35.8TB     62
> 9deadafef9dd   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-48                     0.56%     3.624GiB / 62.28GiB   5.82%     0B / 0B   102TB / 39.2TB    62
> fcfe62a25fd9   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-55                     0.40%     2.968GiB / 62.28GiB   4.77%     0B / 0B   83.2TB / 34.8TB   62
> 38d2d96cc491   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-51                     1.42%     2.666GiB / 62.28GiB   4.28%     0B / 0B   105TB / 38.1TB    62
> e29c6bbc1ae7   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-54                     2.01%     3.687GiB / 62.28GiB   5.92%     0B / 0B   106TB / 44.6TB    62
> 40346a7a45ea   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-osd-53                     0.69%     2.748GiB / 62.28GiB   4.41%     0B / 0B   103TB / 41.4TB    62
> 43c3e3a65531   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-crash-ceph-osd17           0.00%     3.73MiB / 62.28GiB    0.01%     0B / 0B   567MB / 18MB      2
> d9e436f9788c   ceph-7e8bff5c-2761-11ec-9bb0-000c29ebc936-node-exporter-ceph-osd17   15.04%    30.25MiB / 62.28GiB   0.05%     0B / 0B   410MB / 14.6MB    61
> 
> But they are also the biggest swap consumers:
> 
> [root@ceph-osd17 ~]# for file in /proc/*/status; do \
>     awk '/VmSwap|Name/{printf $2 " " $3} END{print ""}' $file; \
>   done | sort -k 2 -n -r | more
> ceph-osd 1553520 kB
> ceph-osd 1447728 kB
> ceph-osd 1218768 kB
> ceph-osd 1117536 kB
> ceph-osd 1026548 kB
> ceph-osd 641632 kB
> ceph-osd 495080 kB
> ceph-osd 424392 kB
> firewalld 26880 kB
> dockerd 20352 kB
> containerd 11136 kB
> docker 6144 kB
> docker 6144 kB
> docker 5952 kB
> docker 5952 kB
> docker 5952 kB
> docker 5952 kB
> docker 5952 kB
> docker 5760 kB
> (sd-pam) 5184 kB
> ceph-crash 4416 kB
> python3 4224 kB
> docker 4032 kB
> systemd-udevd 3264 kB
> 
> On 22.05.2025 18:34, Anthony D'Atri wrote:
> >
> >>
> >> Problem:
> >>
> >> After migration to Rocky 9 (and a new version of Docker) we see that our
> >> OSD hosts consume 100% of their swap space! It takes approximately one
> >> week to fill the swap from 0% to 100%.
> >
> > Why do you have swap configured at all?  I suggest disabling swap in fstab
> and rebooting serially.
> >
> >
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
