On Tue, Jul 31, 2012 at 7:57 PM, viv...@gmail.com <viv...@gmail.com> wrote:
> On 31/07/2012 21:27, Michał Górny wrote:
>> I'd be more afraid about resources, and whether the kernel will be
>> actually able to handle bazillion bind mounts. And if, whether it won't
>> actually cause more overhead than copying the whole system to some kind
>> of tmpfs.
>
> If testing shows that bind mounts are too heavy, we could resort to an
> LD_PRELOAD library that filters access to the disk, or rework sandbox
> to also hide some files without raising errors.  With an appropriate
> database (sys-apps/mlocate comes to mind), every access would have a
> negligible additional cost compared to that of rotational disks.
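
For concreteness, the kind of LD_PRELOAD shim being described would look
roughly like this (a sketch only: is_allowed() is a placeholder for
whatever policy lookup we settle on, and a real shim would also have to
wrap openat(), open64(), fopen(), and friends):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <sys/types.h>

extern int is_allowed(const char *path);  /* hypothetical policy check */

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;

    if (!real_open)
        real_open = dlsym(RTLD_NEXT, "open");

    /* open() only takes a third argument when O_CREAT is set */
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (!is_allowed(path)) {
        errno = ENOENT;   /* "hide w/o errors": pretend it doesn't exist */
        return -1;
    }

    return real_open(path, flags, mode);
}

Built with something like "gcc -shared -fPIC shim.c -o shim.so -ldl" and
injected via LD_PRELOAD=./shim.so.
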
So, while I suspect that bind mount overhead won't actually be that
bad, I'm also thinking that extending the role of sandbox, as has
already been suggested, might be the simpler solution (and it works on
other kernels as well).  I'd still like a run-time solution some day,
but that would probably require SELinux and seems like a much more
ambitious project; we'll probably get quite a bit of QA value out of a
sandbox solution in the meantime.

I think the right solution is not to use external utilities unless
they can be linked in - at least not for anything running in the
sandbox.  We're most likely talking about a VERY high volume of file
opens, and we can't be spawning a process every time one happens, let
alone running bash scripts or whatever.

So, here is my design concept (which had a little help from my LUG - PLUG):

1.  At the start of the build, portage generates a list of files that
are legitimate dependencies - anything in DEPEND or @system.  This can
be done by parsing the /var/db/pkg entries (I assume portage has some
internal API for this already).

2.  Portage or a helper program (whatever is fastest) calls stat on
each file to obtain the device and inode IDs (steps 2-4 are sketched in
code after this list).  Long-term we might consider caching these (but
I'm not sure how stable they are).

3.  The list of valid device/inode IDs is passed to sandbox somehow
(maybe in a file).  Sandbox creates a data structure in memory
containing them for rapid access (a btree or such).

4.  When sandbox intercepts a file open request, it checks the file's
device/inode pair against the list and allows or denies it accordingly.
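
To make steps 2-4 concrete, here is a rough sketch (assuming the step-1
list has been written out one path per line; the sorted array searched
with bsearch() is just a stand-in for the btree/hash structure mentioned
above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

struct devino { dev_t dev; ino_t ino; };

static struct devino *allowed;
static size_t n_allowed;

static int cmp(const void *a, const void *b)
{
    const struct devino *x = a, *y = b;
    if (x->dev != y->dev) return x->dev < y->dev ? -1 : 1;
    if (x->ino != y->ino) return x->ino < y->ino ? -1 : 1;
    return 0;
}

/* Steps 2-3: stat every listed path and build the in-memory table. */
int load_allowed(const char *listfile)
{
    char path[4096];
    size_t cap = 1024;
    FILE *fp = fopen(listfile, "r");
    if (!fp)
        return -1;

    allowed = malloc(cap * sizeof *allowed);
    while (fgets(path, sizeof path, fp)) {
        struct stat st;
        path[strcspn(path, "\n")] = '\0';
        if (stat(path, &st) != 0)
            continue;               /* listed but missing: skip it */
        if (n_allowed == cap)
            allowed = realloc(allowed, (cap *= 2) * sizeof *allowed);
        allowed[n_allowed].dev = st.st_dev;
        allowed[n_allowed].ino = st.st_ino;
        n_allowed++;
    }
    fclose(fp);
    qsort(allowed, n_allowed, sizeof *allowed, cmp);
    return 0;
}

/* Step 4: called from the open() intercept on each access. */
int inode_allowed(const char *path)
{
    struct stat st;
    struct devino key;

    if (stat(path, &st) != 0)
        return 0;
    key.dev = st.st_dev;
    key.ino = st.st_ino;
    return bsearch(&key, allowed, n_allowed, sizeof *allowed, cmp) != NULL;
}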

That said, after doing a quick pass at the sandbox source, it seems
like it is already designed to restrict read access, but it uses
canonical filenames to do so.  I'm not sure those are going to be
reliable, especially if a filesystem contains bind mounts.  Since it
is already checking a read list, if we thought that mechanism would be
robust and fast, we could just remove SANDBOX_READ="/" from
/etc/sandbox.d/00default and then load in whatever we want afterwards.
I need to spend more time grokking the current source.  I'd think that
using inode numbers as a key would be faster than determining a
canonical file name on every file access, but if sandbox is already
doing the latter then obviously it isn't that much overhead.
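
If that route held up, the configuration side of it would be small -
roughly the following (the replacement value is purely illustrative; the
real list would be generated per build from the step-1 data):

# /etc/sandbox.d/00default today effectively allows everything:
SANDBOX_READ="/"

# replaced by a generated, colon-separated per-build list, e.g.:
SANDBOX_READ="/etc/ld.so.cache:/lib:/usr/lib:/usr/include"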

The other thing I'm not sure about here is symlinks.  If a symlink is
contained in a dependency, but the linked file is not, can that file
be used by a package?  I suppose the reverse is also a concern - if a
file is accessed through a symlink that isn't part of a dependency,
but the file it points to is, is that a problem?  I'm wondering if
there is any eselect logic that could cause problems here.  When
calling stat we can choose whether to dereference symlinks.
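
The difference is easy to demonstrate (stat() follows the link and
reports the target's device/inode, while lstat() reports the link's
own):

#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) == 0)
        printf("stat:  dev=%llu ino=%llu (target)\n",
               (unsigned long long)st.st_dev,
               (unsigned long long)st.st_ino);
    if (lstat(argv[1], &st) == 0)
        printf("lstat: dev=%llu ino=%llu (link itself)\n",
               (unsigned long long)st.st_dev,
               (unsigned long long)st.st_ino);
    return 0;
}

Whichever of the two sandbox keys on decides which side of an
eselect-style indirection ends up on the allow list.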
