y need is more physmem.  Don’t
>> provision any swap at all.  This isn’t 1985.
> 
> Well. Why keep lot of boot-time-only code sitting in RAM instead
> of having it gradually paged out?
> 
> https://chrisdown.name/2018/01/02/in-defence-of-swap.html

Note that this is from 2018.  I'm skeptical that the amount of boot-time-only 
usage is nontrivial.
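
One way to test that skepticism on a live box is to see how much actually sits 
in swap after some uptime; a quick check with standard util-linux tools and 
/proc (nothing Ceph-specific here):

```shell
# Show configured swap devices and current usage (util-linux)
swapon --show
# System-wide swap totals straight from the kernel
grep -E '^Swap(Total|Free):' /proc/meminfo
# Per-process swap usage, largest first (reads /proc/<pid>/status)
grep -s VmSwap /proc/[0-9]*/status | sort -t: -k3 -rn | head
```

If the per-process numbers are a few MB of init/systemd pages, that's the 
"boot-time-only" usage in question.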

Swap also confounds expanding filesystems on the boot device, which many people 
under-provision and/or over-partition.

>"swap should be double your physical memory size 
><https://superuser.com/a/111510/98210>"

That smells like an idea from the days of BSD-style swap, where swap backed 
rather than extended physmem.


> _Some_ swap space is a nice bonus. Of course, when it gets used
> for page-ins as well as page-outs during regular use, more RAM
> is needed instead.

Exactly.


> 
>>> Then I created a partition covering the rest of the free space on
>>> each NVMe and used both of them as physical volumes for a single
>>> LVM volume group:
>> 
>> Sharing the boot volume with data is not an ideal strategy.  I have
>> a customer who got themselves into an outage doing that.  You have a
>> zillion SAS/SATA slots empty, put a pair of SSDs into each system for
>> boot/OS, mirror them with MD, and don’t use them for data.
> 
> For my workload, the bottleneck (if any) are the HDDs, so NVMes have lots
> of idle time, even with some WAL and OMAP traffic hitting them.

Note that iostat %util has little meaning on any OSD.  And there's more to the 
equation than "idle time".
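
For reference, the MD boot mirror suggested above is a short exercise; device 
names here are illustrative, adjust to the actual boot SSDs:

```shell
# Build a RAID1 mirror from the two dedicated boot/OS SSDs
# (/dev/sda and /dev/sdb are illustrative device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Keeping that pair out of the OSD/data VGs is the whole point: a full or 
misbehaving data volume can't take the OS down with it.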


> On one of my clusters, the nodes are used both as Ceph OSDs _and_
> as KVM hosts running many instances (with volumes on Ceph RBD
> on the very same cluster). No problem with that.

Converged architecture, Proxmox-style.  Nothing wrong with that if there are 
sufficient resources and osd_memory_target is managed appropriately.
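
In a converged setup that usually means lowering the per-OSD memory budget so 
the guests and page cache still fit; with the standard ceph CLI that is along 
these lines (the 4 GiB value is illustrative, not a recommendation):

```shell
# Cap each OSD's memory autotuning target at 4 GiB
# (illustrative value; size so OSDs + VMs + cache fit in physmem)
ceph config set osd osd_memory_target 4294967296
# Verify the effective value
ceph config get osd osd_memory_target
```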

> Even small servers these days have insane amounts of CPU power
> (especially relative to what is needed for HDD-based RGW traffic),
> so why keep them idle and consuming electricity most of the time?

That's a common assertion, and in some cases it's quite valid.  It does mean, 
however, that compute and storage have to scale together.

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
