Quoting "O. Hartmann" <ohartm...@walstatt.org> (from Thu, 29 Aug 2019 14:26:38 +0200):

On Wed, 28 Aug 2019 13:57:00 +0200
Alexander Leidinger <alexan...@leidinger.net> wrote:

Quoting "O. Hartmann" <ohartm...@walstatt.org> (from Tue, 27 Aug 2019
10:11:54 +0200):

> We have a single ZFS pool (raidz), call it pool00, and this pool00 contains a
> ZFS dataset pool00/poudriere which we want to exclusively attach to a jail.
> pool00/poudriere contains a complete clone of a former, now decomissioned
> machine and is usable by the host bearing the jails. The jail, named
> poudriere,
> has these config parameters set in /etc/jail.conf as recommended:
>
>         enforce_statfs=         "0";

now set to
        enforce_statfs=         "1";

From a security point of view this is of course better, but "0" should have been OK for your problem case.
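For reference, this is roughly what the enforce_statfs levels mean according
to jail(8) - a minimal jail.conf sketch:

        # enforce_statfs levels as documented in jail(8):
        #   0 - the jail sees all mount points of the host (least restrictive)
        #   1 - the jail only sees mount points below its own root directory
        #   2 - the jail only sees the mount point of its root directory itself
        enforce_statfs=         "1";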

[...]

> Here I find the first confusing observation. I can't interact with
> the dataset
> and its content within the jail. I've set the "jailed" property of
> pool00/poudriere via "zfs set jailed=on pool00/poudriere" and I also have to
> attach the jailed dataset manually via "zfs jail poudriere
> pool00/poudriere" to
> the (running) jail. But within the jail, listing ZFS's mountpoints reveals:
>
> NAME                USED  AVAIL  REFER  MOUNTPOINT
> pool00             124G  8.62T  34.9K  /pool00
> pool00/poudriere   34.9K  8.62T  34.9K  /pool/poudriere
>
> but nothing below /pool/poudriere is visible to the jail. Being confused I [...]

Since we use ezjail-admin only for rudimentary jail administration (just
creating and/or deleting the jail; maintenance is done manually), jails are
rooted at

pool00                          /pool00
pool00/ezjail/                  /pool/jails
pool00/ezjail/pulverfass        /pool/jails/pulverfass

"pulverfass" is the jail supposed to do the poudriere's job.

Since I got confused about the orientation of the "directory tree" - the root
is at the top level instead of the bottom - I corrected the ZFS dataset holding
the poudriere data accordingly:

pool00/ezjail/poudriere         /pool/poudriere

The jail "pulverfass" now is supposed to mount the dataset at

        /pool/jails/pulverfass/pool/poudriere
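In terms of commands, the host-side setup now looks roughly like this (a
sketch with the names from above; the rename step is an assumption about how
the dataset was moved under pool00/ezjail):

        # on the host (sketch)
        zfs rename pool00/poudriere pool00/ezjail/poudriere
        zfs set mountpoint=/pool/poudriere pool00/ezjail/poudriere
        zfs set jailed=on pool00/ezjail/poudriere

Once jailed=on is set and the dataset is mounted from within the jail, its
mountpoint is relative to the jail's root, so from the host it shows up under
/pool/jails/pulverfass/pool/poudriere.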


Please be more verbose about what you mean by "interact" and "is visible".

Do zfs commands on the dataset work?

After I corrected my mistake of not respecting the mountpoint according to
enforce_statfs, with the changes explained above I'm able to mount
/pool/poudriere within the jail "pulverfass", but I still have problems with
the way I have to mount this dataset. When zfs-mounted (zfs mount -a), I'm
able to use the dataset with poudriere as expected! But after rebooting the
host, and after all jails have been restarted as well, I first have to make
the dataset /pool/poudriere available to the jail via the command "zfs jail
pulverfass pool00/ezjail/poudriere" - which does not seem to be done
automatically by the startup process - and then from within the jail
"pulverfass" I can mount the dataset as described above. This seems to be a
big step forward for me.
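Spelled out as commands, the sequence after a reboot currently looks like
this (a sketch; same names as above):

        # on the host, once the jail "pulverfass" is running
        zfs jail pulverfass pool00/ezjail/poudriere

        # then, from within the jail
        zfs mount -a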

Great.

Note, I don't remember if you can manage the root dataset of the jail, but at
least additional datasets should be manageable. I don't have a jail where the
root is managed in the jail, just additional datasets. Those need to have a
mountpoint set after the initial jailing and then maybe even need to be
mounted for the first time.
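As a rough sketch of what I mean, for an additional dataset that has already
been handed to the jail with "zfs jail" (I reuse the names from your setup,
so treat them as an assumption):

        # from within the jail; assumes the jail may mount ZFS
        # (allow.mount, allow.mount.zfs, enforce_statfs < 2)
        zfs set mountpoint=/pool/poudriere pool00/ezjail/poudriere
        zfs mount pool00/ezjail/poudriere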

Please also check /etc/defaults/devfs.rules if the jail rule contains
an unhide entry for zfs.

Within /etc/jail.conf

        devfs_ruleset=          "4";

is configured as a common ruleset for all jails (in the common portion of
/etc/jail.conf).
There is no custom devfs.rules in /etc/, so /etc/defaults/devfs.rules should
apply and as far as I can see, there is an "unhide" applied to zfs:

[... /etc/defaults/devfs.rules ...]

# Devices usually found in a jail.
#
[devfsrules_jail=4]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path fuse unhide
add path zfs unhide
[...]

So, I guess everything is all right from this perspective, isn't it?

Yes. And as you were able to do the zfs mount, you even have the proof.
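Another quick check, should you ever need one: verify from inside the jail
that the zfs device node is actually unhidden, for example:

        # run on the host; "pulverfass" as the jail name
        jexec pulverfass ls -l /dev/zfs

If /dev/zfs is missing there, the devfs ruleset did not take effect for that
jail.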

Is there a way to automatically provide the ZFS dataset of choice to the
proper jail, or do I have to either

issue manually "zfs jail jailid/jailname pool/dataset" or put such a command as
script-command in the jail's definition portion as

        exec.prestart+= "zfs jail ${name} pool00/ezjail/poudriere";
?

I never tried that with plain jails. ezjail has the jail_xxx_zfs_datasets variable in the config directory. Unfortunately the attachment of the datasets to the jail does not happen early enough to be picked up by the start scripts (so probably use exec.poststart; if you use that, you can also use it to launch "jexec ... service zfs start" inside the jail). iocage works much better in this regard.
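For plain jail.conf that idea would look roughly like this - a sketch only,
untested, with the jail and dataset names taken from your mail:

        pulverfass {
                # attach the jailed dataset once the jail is up ...
                exec.poststart+= "zfs jail ${name} pool00/ezjail/poudriere";
                # ... then mount it from inside the jail
                exec.poststart+= "jexec ${name} service zfs start";
        }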

Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org    netch...@freebsd.org  : PGP 0x8F31830F9F2772BF
