Package: cryptsetup
Version: 2:1.7.0-2
Severity: wishlist
Hi. I've been thinking about how cryptsetup (especially the initramfs scripts and cryptdisks_start/stop) and also crypttab could be improved to handle multi-device setups. The background is quite obvious: we have container types on top of dm-crypt (typically filesystems) which may consist of multiple devices, most prominently RAID, e.g. from btrfs or ZFS.

What one typically wants is the following:
- when booting from such a device, the initramfs should set up all the necessary dm-crypt devices
- a way to run e.g. cryptdisks_start/stop and have all the respective devices set up "at once" (i.e. without having to start each device manually)
- it would probably make sense to optionally *not* ask for a password for each of the respective devices
- some parallelisation when setting up the mappings (with high iter times it would take quite a while to set up e.g. a 10-device RAID if all the underlying dm-crypt devices are set up sequentially)

There are also open questions like:
- Should we abort (and close all already-open devices) when one of them fails to be set up (e.g. no passphrase)? Should we allow specifying a minimum number of devices that need to succeed?
- Is it reasonable to do the parallelism in general (i.e. independent of multi-device), especially for devices set up automatically during boot (except root/resume)? Or is this kind of parallelism rather something that should be handled at other levels (e.g. systemd)?

One should also consider the possible key setups. The multiple devices belonging together may:
- all use the same key (in terms of key slot keys)
- all use different keys (in terms of key slot keys)
(I'd guess they should, for security reasons, always have different master keys?)

Further problems:
- How to simplify passphrase/key/keyscript handling for the multiple devices?
- How to determine which devices belong together?
So let's see what one could do.

First, automatically determining which dm-crypt devices belong together for a multi-device container seems possible only when these are already opened, and only if we support that for all block layers (LVM, MD, etc. pp.) that may sit beneath a multi-device container (e.g. a fs like btrfs). That would work similarly to how we determine the crypt device for the root fs when setting up the initramfs. But it doesn't work in general, and not at all if the block devices are used raw (i.e. no btrfs or similar on top that somehow knows what belongs together).

My proposal would therefore be:

1) Add a new multidevice=name parameter to the 4th field of crypttab. All entries with the same name would form a group of dm-crypt devices which are necessary for some fs/etc. on top of them. This would need to be supported for the root/resume devices of the initramfs, and also for cryptdisks_start/stop. For the latter I'd suggest designating the "name" as a virtual device name that cryptdisks_start/stop understands.

Consider e.g. the following crypttab:

data1 /dev/sda device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data2 /dev/sdb device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data3 /dev/sdc device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data4 /dev/sdd device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data

(Which could be a 4-disk RAID6 btrfs, with a gpg-encrypted key found in key.gpg on /dev/disk/by-label/keyUSBstick.)

$ cryptdisks_start data1
would still allow me to set up a single crypt device only. But when I say:
$ cryptdisks_start data
it would try to set up all four. If a device name matches a group name, I'd say the multidevice parameter should be completely ignored.

How to handle passphrases/keys?
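Just to illustrate how cryptdisks_start could resolve such a group name: a minimal sketch (the multidevice= option is of course only proposed here, and the crypttab_group helper is a hypothetical name):

```shell
#!/bin/sh
# crypttab_group: print the target names of all crypttab entries whose
# 4th-field options contain multidevice=<group>.  Hypothetical helper,
# assuming the proposed multidevice= option.
crypttab_group() {
    group="$1" tabfile="$2"
    awk -v g="$group" '
        BEGIN { want = "multidevice=" g }
        !/^[[:space:]]*(#|$)/ {            # skip comments and blank lines
            n = split($4, opts, ",")       # 4th field: comma-separated options
            for (i = 1; i <= n; i++)
                if (opts[i] == want) { print $1; break }
        }' "$tabfile"
}

# Demo with the example crypttab from above:
TAB=$(mktemp)
cat > "$TAB" <<'EOF'
data1 /dev/sda device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data2 /dev/sdb device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data3 /dev/sdc device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
data4 /dev/sdd device=/dev/disk/by-label/keyUSBstick:pathname=/key.gpg luks,keyscript=decrypt_openpgp,multidevice=data
EOF
crypttab_group data "$TAB"    # prints data1 data2 data3 data4, one per line
```

cryptdisks_start would first check whether the argument names a real crypttab entry, and fall back to such a group lookup only if it doesn't.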
Now, it would be annoying if I had to enter the same key/passphrase four times. Or scan my fingerprint (if someone really trusts that) four times. So there should be a way for cryptdisks_start, respectively the initramfs and the boot system, to do that only once, if possible.

What one could possibly do: if the keyscript and the 3rd field are the same, run it only once, cache the result somehow, and reuse it for the subsequent devices.

That gives a problem: what if all devices actually use *different* keys, e.g. read from a smartcard, with keyscript=smartcardscript and the 3rd field being something like "usb-card-reader-1"? It would be the same for all four, but I might actually have wanted to use 4 different cards. Failure.

I see two ways around this:
- Either we require such keyscripts to accept a dummy option in the 3rd field, so one could make them all different:
  hwdevice=usb-card-reader-1:dummy=1
  hwdevice=usb-card-reader-1:dummy=2
  hwdevice=usb-card-reader-1:dummy=3
  hwdevice=usb-card-reader-1:dummy=4
  => all would be different, bon!
- Or we handle it generally, by having another 4th field option named e.g. "key-group" or "passphrase-group". Entries with the same key-group value would check whether a passphrase/key is already cached for that group, use it if so, and read it in otherwise. That would allow us to "share" the passphrase/key even beyond the multi-device group.

How to technically "cache" the passphrases/keys for reuse by the later devices? The kernel keyring comes to mind, i.e. keyutils. One could create user keys à la cryptsetup_<key-group name>; of course, only the user that entered the key must be allowed to re-use it.

Some worries I have with that: I don't want the key stored in the keyring longer than necessary. keyutils allows setting timeouts (just add another 4th field option like keyttl), but that's IMHO only half the solution, especially with respect to the initramfs root/resume fs.
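The "ask once per key-group" logic could look roughly like the sketch below. A real implementation would of course use the kernel keyring (e.g. keyctl add user "cryptsetup_$group" ... @u to cache, keyctl search to reuse); here a private tmpdir stands in for the keyring so the sketch runs anywhere, and get_key/stub_keyscript are hypothetical names:

```shell
#!/bin/sh
# get_key: fetch the key for a key-group, running the keyscript only the
# first time.  The tmpdir cache is a stand-in for the kernel keyring.
CACHE=$(mktemp -d)

get_key() {
    group="$1" keyscript="$2" keyarg="$3"
    if [ ! -f "$CACHE/$group" ]; then
        "$keyscript" "$keyarg" > "$CACHE/$group"   # ask only once per group
    fi
    cat "$CACHE/$group"
}

# Demo: a stub "keyscript" that records how often it is actually invoked.
CALLS=$(mktemp)
stub_keyscript() { echo x >> "$CALLS"; echo "secret-for-$1"; }

get_key data stub_keyscript usb-card-reader-1 >/dev/null   # runs the keyscript
get_key data stub_keyscript usb-card-reader-1 >/dev/null   # served from cache
echo "keyscript ran $(wc -l < "$CALLS") time(s)"           # invoked only once
```

With the keyring variant, key permissions would naturally restrict reuse to the user that entered the key.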
What I'd rather want there: no timeout, but clear the key once setting up the last device has failed or succeeded. So perhaps simply: if keyttl isn't set, or is 0, forcefully remove the key from the kernel keyring once the last device from the currently set-up multi-device group with the same key-group has been tried (whether it failed or succeeded). And maybe, just to be sure, a local-bottom initramfs script which clears all cached keys.

With that, parallelising the crypt device setup of a multi-device target is also already close: first, for the current multi-device, get the key/passphrase for each of its key-groups (once). Then concurrently luksOpen (or whatever) these devices with the cached key/passphrase. Maybe doing *all* of them concurrently is actually too much. So, you guessed it, another 4th field option that allows specifying the parallelism for each multidevice/key-group combination.

Shall we fail if not all devices can be set up? That's a difficult question. On the one hand I'd say it's the duty of the upper layers to determine whether enough devices are present (e.g. btrfs must decide, respectively provide options that let the user control, whether a degraded RAID should be mounted or not). However, it's probably unrealistic that all the possible upper layers actually provide that, especially when the crypt devices are used raw. So maybe allow another 4th field option like "maxmissingdevs" (some better name should be found) which tells how many devices may fail to be set up before the initramfs scripts, respectively cryptdisks_start, bail out with an error rather than continuing:
0 = all need to be successfully set up
1, 2, 3 = that many devices are allowed to be missing, so for RAID5 it would be 1, for RAID6 it would be 2, etc.
-1 = don't care at all whether devices failed or not

Well... I'm curious to hear your opinions.

Cheers,
Chris.
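The parallel setup with such a failure budget could be sketched like this (open_one is a stub standing in for the real per-device work, i.e. keyscript plus cryptsetup luksOpen; open_group and the option semantics follow the maxmissingdevs proposal above):

```shell
#!/bin/sh
# open_group <maxmissing> <dev>...: open all devices concurrently and
# fail if more than <maxmissing> of them could not be set up
# (-1 = don't care, as proposed above).
open_group() {
    maxmissing="$1"; shift
    faildir=$(mktemp -d)
    for dev in "$@"; do
        ( open_one "$dev" || : > "$faildir/$dev" ) &   # one job per device
    done
    wait                                               # all jobs finished
    failed=$(ls "$faildir" | wc -l)
    rm -rf "$faildir"
    if [ "$maxmissing" -lt 0 ]; then return 0; fi      # -1: don't care
    [ "$failed" -le "$maxmissing" ]
}

# Demo: sdc "fails" to open, the other three succeed.
open_one() { [ "$1" != sdc ]; }

open_group 1 sda sdb sdc sdd && echo "degraded RAID5: continuing"
open_group 0 sda sdb sdc sdd || echo "all devices required: bailing out"
```

On failure, the caller (initramfs script or cryptdisks_start) would then close the already-opened mappings again and drop the cached keys.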