This. From the Btrfs Gotchas page:

> Files with a lot of random writes can become heavily fragmented (10000+
> extents), causing thrashing on HDDs and excessive multi-second spikes of
> CPU load on systems with an SSD or a large amount of RAM.
>
>    - On servers and workstations this affects databases and virtual
>    machine images.
>       - The nodatacow mount option
>       <https://btrfs.wiki.kernel.org/index.php/Mount_options> may be of
>       use here, with associated gotchas.
>    - On desktops this primarily affects application databases (including
>    Firefox and Chromium profiles, GNOME Zeitgeist, Ubuntu Desktop Couch,
>    Banshee, and Evolution's datastore).
>       - Workarounds include manually defragmenting your home directory
>       using btrfs fi defragment. Auto-defragment (mount option autodefrag)
>       should solve this problem in 3.0.
>    - Symptoms include btrfs-transacti and btrfs-endio-wri taking up a lot
>    of CPU time (in spikes, possibly triggered by syncs). You can use filefrag
>    to locate heavily fragmented files (may not work correctly with
>    compression).
>
>
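To make the wiki's workaround concrete, here is a sketch of how one might locate heavily fragmented files and then defragment them. The paths and the 1000-extent threshold are illustrative, not from the wiki, and (as the wiki notes) filefrag's extent counts can be misleading on compressed files:

```shell
# List files under $HOME whose extent count exceeds 1000
# (filefrag prints lines like "path: 2000 extents found"):
find "$HOME" -xdev -type f -size +1M -exec filefrag {} + 2>/dev/null \
  | awk -F: '{n = $2 + 0} n > 1000 {print}'

# Then defragment the whole home directory recursively:
btrfs filesystem defragment -r "$HOME"
```

For databases and VM images, the per-file alternative to the nodatacow mount option is setting the no-COW attribute (chattr +C) on the containing directory before the files are created.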
On Wed, Jul 31, 2019 at 5:50 AM Mart van de Wege <mvdw...@gmail.com> wrote:

> Stefan Monnier <monn...@iro.umontreal.ca> writes:
>
> >> Is it safe to use autodefrag for my use case?
> >
> > It sounds like it might be "safe" (the text doesn't actually say it's
> > unsafe, but just that it has downsides).
> >
> > I do wonder why you'd want to do that, tho.  Fragmentation is typically
> > something that clueless Windows users worry about
>
> No. Fragmentation is an issue with all copy-on-write filesystems
> (including ZFS, which avoids periodic defrag by keeping an enormous
> amount of information in memory and doing defrag on the fly on that).
>
> Mart
>
> --
> "We will need a longer wall when the revolution comes."
> --- AJS, quoting an uncertain source.
>
>
