Here are some more thoughts from me. This took me a long time to write, read, and rewrite. Consider this me brainstorming on pros and cons. I like apparmor and am happy to see a push to confine more applications, and thanks for offering a strategy for doing that.
> The number of policies in this package is very large. When no policy cache
> exists (as on first installation), building it can be very long. Even when a
> cache exists, loading all policies is not instantaneous.

The upgrade took double the time, and there were no changes: I just repeated
"dpkg -i" with the same package. I also noticed the package doesn't use
dh_apparmor, so no dh_apparmor snippets end up in the rendered postinst; maybe
that is where some debhelper smarts are missing. I didn't investigate further.

> - It allows the AppArmor team to review profiles carefully, maintain them,
>   and ensure their coherency and how they interact with each other.
> - It allows decoupling profiles from the application maintainers, who don't
>   necessarily have the necessary AppArmor knowledge.

But I think we would want coupling. Without it, the profile can evolve in one
direction and the application in another, and the confinement will break. You
could have an old version of the profile installed alongside a new version of
the application package. Users could have pinned an application package to a
specific version. Users could want a profile fix from bin:apparmor.d-N+1, but
keep another profile shipped in bin:apparmor.d-N for another application,
because in N+1 it broke their use case.

> - That also allows updating profiles without needing to update the
>   application package.

Conversely, you would be updating a package that ships 1500 profiles. You
would be fixing a bug in one profile, and could be introducing a bug in
another (bugs happen).

I think at the core I have two objections to this whole approach:

1) All profiles are loaded even when not needed, leading to the problems in
comment #6. You explained several optimizations, but to me the best
optimization is to not load what is not needed :)

2) Decoupling from the application: high risk of the profile being meant for
one version of the app, while a later version has different requirements that
do not match the profile anymore.
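On the dh_apparmor note above: if the packaging did use it, the debian/rules
hook would look roughly like this. This is a sketch with made-up names, not
taken from the actual apparmor.d packaging ("usr.bin.foo" and "foo" are
placeholders):

```make
# debian/rules -- hypothetical sketch. dh_apparmor adds postinst/postrm
# snippets that create the /etc/apparmor.d/local/<profile> include and
# reload the profile on install/upgrade/removal.
override_dh_install:
	dh_install
	dh_apparmor --profile-name=usr.bin.foo -pfoo
```

That generated snippet is exactly the debhelper smarts that seem to be
missing from the rendered postinst.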
This discrepancy looks easier and quicker to catch if the profile lives
together with the application. The risk of updates with this single-package
approach also seems much higher.

Now, you make a good point about package maintainers not necessarily having
the AppArmor knowledge, or even a desire to confine their application. Us
suddenly injecting an AppArmor profile into their package is rude and
disruptive. And we would also have potentially up to 1500 new delta pieces
added to Debian packages. How can we crack this nut?

Have you guys thought of ways to still ship all profiles in a separate binary
package, but not load them unless they are needed, i.e. unless the
application they are meant to confine is installed? Can we play some tricks
with triggers?

I guess similar problems and discussions were had in the past about the
kernel modules packages (we have two binary packages for kernel modules,
IIRC), and linux-firmware (which also installs a whole bunch of binary blobs
regardless of whether you have that hardware or not: you *could* have it in
the future). But none of these are loaded by default: they are just files
available on disk, in case they are needed.

Some other thoughts:

a) A promotion plan: what happens once a profile matures and can be shipped
with the application? What are the conditions? What packaging changes will be
needed then? We will have to add careful Breaks/Replaces, following
https://wiki.debian.org/PackageTransition, to avoid conflicts like the one in
comment #5.

b) Or is the plan to always ship the profile in the distro via bin:apparmor.d,
to be available in case the application package is installed, and never ship
it in the application package itself? Counting on the current installation
times being made faster, and on memory consumption being reduced?

c) A testing plan: how can the profiles from src:apparmor.d be tested?
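To make the triggers idea above concrete, here is a rough sketch of what I
have in mind; all paths and names are hypothetical, and it assumes profiles
are named after the binary they confine, which is a simplification.
bin:apparmor.d could declare a dpkg file trigger on the binaries it knows
about:

```
# debian/triggers in bin:apparmor.d -- hypothetical: one line per
# confined binary; dpkg activates the trigger when any package
# installs or removes a file at this path.
interest-noawait /usr/bin/foo
```

and its postinst could then load only the profiles whose target binary is
actually present:

```sh
# Fragment of a hypothetical bin:apparmor.d postinst. dpkg invokes it as
# "postinst triggered '<path> ...'" with the activated trigger names in $2.
case "$1" in
  triggered)
    for bin in $2; do
      profile="/etc/apparmor.d/$(basename "$bin")"
      if [ -x "$bin" ] && [ -f "$profile" ]; then
        apparmor_parser -r "$profile" || true
      fi
    done
    ;;
esac
```

Whether this scales to 1500 profiles, and how removals are handled, I have
not thought through; it's just to show the mechanism exists.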
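For (c), I would imagine something along these lines in src:apparmor.d (again
a sketch; "foo" is a placeholder application):

```
# debian/tests/control in src:apparmor.d -- hypothetical per-application
# test that installs both the profile package and the application.
Tests: foo-confinement
Depends: apparmor.d, foo
Restrictions: isolation-machine, needs-root
```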
We would have to have an autopkgtest in src:apparmor.d for package bin:FOO
that installs *both* bin:apparmor.d and bin:FOO, and from your comments it
looks like that fails due to OOM, going back to the optimization problem.

d) What about more restricted systems like Raspberry Pis: are they out of
scope for this package at this stage?

e) What will SRUs look like for src:apparmor.d? How many profiles would you be
updating in one go? How many applications would have to be tested separately?

f) What happens if I have a host spawning dozens of LXD containers, and all
those containers install bin:apparmor.d? "Don't do it"? :)

I also understand this follows an upstream project, which has all these
profiles in a git repository/tarball, and having one source Debian package
mimicking that makes sense. But even with optimizations, unless they are
really fantastic, I don't see right now what this will look like in the long
term.

Now, I'm not the final word on this. This just appeared on my radar for
sponsorship reasons, and I have a passion for application confinement, having
written some AppArmor profiles in the recent past. I truly welcome others to
join the discussion, and have no objection to being proven wrong.

--
You received this bug notification because you are a member of Ubuntu Bugs,
which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2121409

Title:
  [FFE] add a new apparmor.d package containing several apparmor profiles

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2121409/+subscriptions

--
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
