We see terrible I/O performance as soon as multiple VMs do file I/O; the VMs mainly run Java compilation jobs on those servers. With 2 parallel jobs everything is fine, but with 10 jobs we get the warning "HEALTH_WARN X requests are blocked > 32 sec; Y osds have slow requests". I have two enterprise SSDs that showed good results for Ceph when tested with fio, but they are too small to build a separate "ssd_vms" pool and advertise it in OpenStack as a separate storage backend.
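For reference, we qualified the SSDs with the usual single-threaded sync-write test (a sketch of what we ran; /dev/sdX and the runtime are examples, and note this writes to the raw device):

    # Direct, synchronous 4k writes at queue depth 1 -- the access
    # pattern of the Ceph journal/WAL. Destroys data on the target!
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --name=ceph-ssd-qualification

(Had the SSDs been larger, a device-class CRUSH rule would have given us the separate pool, roughly "ceph osd crush rule create-replicated ssd-rule default host ssd" followed by "ceph osd pool create ssd_vms 128 128 replicated ssd-rule", but we lack the capacity for that.)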
--
-----------------------------------------------------------------
Robert Eikermann M.Sc.RWTH | Software Engineering
Lehrstuhl für Software Engineering | RWTH Aachen University
Ahornstr. 55, 52074 Aachen, Germany | http://www.se-rwth.de
Phone ++49 241 80-21306 / Fax -22218

From: Wido den Hollander [mailto:[email protected]]
Sent: Monday, 16 September 2019 11:52
To: Eikermann, Robert <[email protected]>; [email protected]
Subject: Re: [ceph-users] Activate Cache Tier on Running Pools

On 9/16/19 11:36 AM, Eikermann, Robert wrote:

> Hi,
>
> I'm using Ceph in combination with OpenStack. For the "VMs" pool I'd
> like to enable a writeback cache tier, as described here:
> https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/ .

Can you explain why? Cache tiering has some serious flaws and can even
decrease performance instead of improving it.

What are you trying to solve?

Wido

> Should it be possible to do that on a running pool? I tried to do so
> and immediately all VMs (Linux Ubuntu OS) running on Ceph disks got
> read-only filesystems. No errors were shown in Ceph (but also no
> traffic arrived after enabling the cache tier). Removing the cache
> tier, rebooting the VMs and doing a filesystem check repaired
> everything.
>
> Best
> Robert
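For reference, "enabling the cache tier" following the linked Luminous doc amounts to roughly this sequence (a sketch; the pool names "vms" and "vms-cache" are examples):

    # Attach the cache pool to the backing pool and make it writeback.
    ceph osd tier add vms vms-cache
    ceph osd tier cache-mode vms-cache writeback
    # set-overlay is the step that starts redirecting client I/O
    # into the cache pool.
    ceph osd tier set-overlay vms vms-cache
    # Writeback mode needs a hit set and sizing limits configured;
    # without them the tier can stall or simply fill up.
    ceph osd pool set vms-cache hit_set_type bloom
    ceph osd pool set vms-cache target_max_bytes 100000000000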
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
