On 3/01/21 12:24 am, Andrei POPESCU wrote:
> On Sat, 02 Jan 21, 01:40:14, David Christensen wrote:
>> On Linux (including Debian), MD (multiple disk) and LVM (logical
>> volume manager) are the obvious choices for software RAID. Each has
>> its own learning curve, but neither is too steep.
> An interesting article I stumbled upon:
> http://www.unixsheikh.com/articles/battle-testing-data-integrity-verification-with-zfs-btrfs-and-mdadm-dm-integrity.html
Hmm. It only discusses software RAID in the context of RAID-5. They
acknowledge that RAID-5 is 'frowned upon', but don't go into why, and
say they think it's great. My take: once you've lost one disk, you have
the same reliability as a RAID-0 (stripe) set of the remaining disks -
much less reliable than a single disk with no RAID at all.
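A rough back-of-envelope sketch of that claim in Python (the 3%
per-disk failure probability over a rebuild window is made up, and
failures are assumed independent - both simplifications):

p_fail = 0.03          # assumed per-disk failure probability (illustrative)
n = 4                  # disks in the original RAID-5 array
ok = 1 - p_fail

# Healthy RAID-5 survives zero or one failure among n disks.
healthy = ok ** n + n * p_fail * ok ** (n - 1)

# Degraded RAID-5 (one disk already gone) is effectively RAID-0 of
# n-1 disks: lose any remaining disk and the array is gone.
degraded = ok ** (n - 1)

single = ok            # a single plain disk, for comparison

print(f"healthy 4-disk RAID-5: {healthy:.4f}")   # ~0.9948
print(f"degraded (3-disk R0):  {degraded:.4f}")  # ~0.9127
print(f"single disk:           {single:.4f}")    # 0.9700

So until the rebuild finishes, the degraded array really does sit
below a single plain disk.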
I generally stick with RAID-1, but would consider RAID-10.
> My take:
> If you care about your data, you should be using ZFS or btrfs.
Licensing issues and the resulting complications stop me from using
ZFS, and the last I heard, btrfs wasn't regarded as being as reliable as
ext3/4 or xfs (I generally use xfs). I may be out of date, though, and
I've heard bad reports about xfs too ...
> In case of data corruption (system crash, power outage, user error, or
> even just a HDD "hiccup"), plain md without the dm-integrity layer
> won't even be able to tell which is the good data, and will overwrite
> your good data with bad data. Silently.
I guess I need to investigate that. Any further references? I've had
crashes and power outages and never noticed any problems - though that
doesn't mean they won't happen (or even that they haven't already).
Doesn't a journalling filesystem on top cover that?
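For my own understanding, here's a toy Python sketch of the failure
mode described above: with two mirror copies and no checksum, a
mismatch tells you nothing about which copy is good, whereas a
per-block checksum recorded at write time (roughly what dm-integrity,
ZFS and btrfs add) identifies the bad copy. The block contents and the
choice of SHA-256 are made up for illustration:

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

good = b"important data"
bad = b"important dat\x00"   # one copy silently corrupted on disk

# Plain RAID-1: two copies, no checksum. A scrub can see that the
# copies differ, but nothing says which one is correct.
mirror = [good, bad]
print("copies differ:", mirror[0] != mirror[1])

# With a checksum stored when the data was written, the good copy
# is identifiable and the bad one can be rewritten from it.
stored = checksum(good)
for i, copy in enumerate(mirror):
    print(f"copy {i}:", "good" if checksum(copy) == stored else "BAD")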
Cheers,
Richard