On 2016-02-12 16:04, Ahmed S. Darwish wrote:
On Fri, Feb 12, 2016 at 11:57:00AM +0100, David Henningsson wrote:

On 2016-02-12 00:59, Ahmed S. Darwish wrote:
Hello!

Hi and thanks for working on it!

Having skimmed through the patches, I'm still not convinced by the
auto-send mechanism for the memfds.

E.g., consider we have clients 1 and 2, and client 1 plays back audio to a
sink. Client 2 monitors the sink-input of client 1 (like parec does if you
specify --monitor-stream). Could this cause the memfd used to communicate
with client 1 to also be shared with client 2? If so, that's a security
breach.

The mechanism seems fragile to such breaches but I'm not sure, maybe I'm
wrong and you can explain what would happen instead?


Thanks for the quick response and patch reviews ;-)

We've just discussed this over IRC, so I'll detail things here
for archival purposes. This might also be useful to add somewhere
in the code or in future changelogs.

Thanks for the explanations! This is a good summary.


- We now have 3 mempools in the system: a global mempool, and 2
   per-client mempools -- one created by the client for passing
   playback audio, and one created by the server for srbchannels.

- For any per-client memfd mempool, the file descriptors are
   immediately closed after being sent to the other side. The
   receiving end likewise closes all received file descriptors
   immediately after doing an mmap().

   Thus we run no risk of data leaks in that case. The 'secret'
   shared by the two PA endpoints is discarded directly after use.

- A special case is the global server-wide mempool. Its fd is
   kept open by the server, so whenever a new client connects, it
   passes it the fd for memfd-fd<->SHM-ID negotiation.

   Even in this case, communication is then done using IDs and
   no further fds are passed. The receiving end also does not
   distinguish between per-client and global mempools: it directly
   closes the fd after doing an mmap().

- A question then arises: as was done with srbchannels, why not
   transform the global mempool to a per-client one?

   This is planned, but is to be done in a follow-up project. The
   global mempool is touched *everywhere* in the system -- in
   quite different manners and usage scenarios. It's also used
   by a huge set of modules, including quite esoteric ones.

   Touching this will require extensive testing for each affected
   part. So this will be quite a HUGE patch series of its own,
   possibly done in chunks of 10 patches at a time.

Hmm. I'm thinking, to get the security without 100 patches first, we could start by simply not sharing the global mempool with the clients. That way, it will fall back to going over the socket, which might mean an extra memcpy even if that socket is an srbchannel. But still, it would be secure. Right?

Then, we can work on cleaning the global mempool up, by fixing modules one by one, the commonly used ones (such as the ALSA source modules) first. Indeed now we'll have to copy memory to each source_output->client mempool instead of to just one global mempool, so that will be a change.



   But when it's done, we'll have all the necessary infrastructure
   to directly secure it.

   For now, we can completely disable POSIX SHM and things should
   function as expected. This is a win: yes, it's incomplete
   until we personalize the global mempool too, but it's still a
   step in the right direction. The memfd code path will also be
   heavily tested in the process.

Thanks,
Darwish


The all-improved memfd patch series ;-)

==> v1 cover letter:

Memfd is a simple memory-sharing mechanism, added by the systemd/kdbus
developers, to share pages between processes in an anonymous manner:
no global registry needed, no mount point required, and relatively
secure.

This patch series introduces memfd support to PulseAudio, laying out
the necessary (but not yet sufficient) groundwork for sandboxing,
protecting PulseAudio from its clients, and protecting clients from
each other.

Memfd support is added in quite a transparent manner, respecting
current PA mechanisms and abstractions. The lower-level layers are
properly refactored and extended: the srbchannel communication path
is transformed to memfds by only changing a single line of code.

==> v2 changes:

- All pools (srbchannel + client audio data mempool + global mempool)
   now use memfds by default. An empty /dev/shm FTW! ;-)

- Design issues pointed out by David are now fixed, leading to much
   smaller code. The new memfd implementation shares almost all of
   the existing POSIX SHM code paths.

- Heavy focus on not leaking any file descriptors throughout the
   code. Every fd, sent and received, has a clear owner responsible
   for closing it at appropriate times.

- If a mempool is memfd-backed, we now require registering it with
   the pstream before sending any blocks it covers. This is done
   to minimize fd-passing overhead and the possibility of fd leaks.

- A new, special, pstream packet type is introduced. That special
   packet can be sent anytime during the pstream's lifetime and is
   used for creating on-demand SHM-ID-to-memfd mappings.

- All issues pointed out by Tanu's very detailed review (vacuuming,
   memfd enablement over pstream, protocol changes, etc.) are now
   fixed.

- Testing! I've been heavily using these patches now for three
   weeks without visible issues.

==> references:

- v1 submission
   http://thread.gmane.org/gmane.comp.audio.pulseaudio.general/24110

- memfd_create(2), David Herrmann [memfd author]
   https://dvdhrm.wordpress.com/2014/06/10/memfd_create2/

- xdg-app
   https://wiki.gnome.org/Projects/SandboxedApps

==> diffstat:

Ahmed S. Darwish (12):
   pulsecore: Cache daemon shm size inside pa_core
   pulsecore: srbchannel: Introduce per-client SHM files
   pulsecore: Transform pa_mempool_new() into a factory method
   pulsecore: Split pa_shm mempool backend into pa_shm and pa_privatemem
   pulsecore: SHM: Introduce memfd support
   pulsecore: memexport/memimport: Support memfd blocks
   pulsecore, tests: Use new memexport/memimport API
   pulsecore: Specially mark global mempools
   pulsecore: pstream: Support memfd blocks transport
   srbchannel: Use memfd transport by default; bump protocol version
   pulse: client audio: Use memfd transport by default
   core: Use memfd transport by default

  PROTOCOL                         |  13 ++
  configure.ac                     |  21 +++-
  man/pulse-client.conf.5.xml.in   |   5 +
  man/pulse-daemon.conf.5.xml.in   |   5 +
  man/pulseaudio.1.xml.in          |  11 ++
  shell-completion/bash/pulseaudio |   4 +-
  shell-completion/zsh/_pulseaudio |   1 +
  src/Makefile.am                  |   7 ++
  src/daemon/cmdline.c             |  13 +-
  src/daemon/daemon-conf.c         |   1 +
  src/daemon/daemon-conf.h         |   1 +
  src/daemon/main.c                |   4 +-
  src/pulse/client-conf.c          |   1 +
  src/pulse/client-conf.h          |   2 +-
  src/pulse/context.c              |  48 +++++++-
  src/pulse/internal.h             |   2 +
  src/pulsecore/core.c             |  31 +++--
  src/pulsecore/core.h             |  13 +-
  src/pulsecore/mem.h              |  67 ++++++++++
  src/pulsecore/memblock.c         | 245 ++++++++++++++++++++++++++++++++-----
  src/pulsecore/memblock.h         |  18 ++-
  src/pulsecore/memfd-wrappers.h   |  68 +++++++++++
  src/pulsecore/privatemem.c       |  78 ++++++++++++
  src/pulsecore/privatemem.h       |  35 ++++++
  src/pulsecore/protocol-native.c  |  92 ++++++++++++--
  src/pulsecore/pstream.c          | 257 +++++++++++++++++++++++++++++++++++----
  src/pulsecore/pstream.h          |   2 +
  src/pulsecore/shm.c              | 247 +++++++++++++++++++------------------
  src/pulsecore/shm.h              |  33 ++++-
  src/tests/cpu-mix-test.c         |   2 +-
  src/tests/lfe-filter-test.c      |   2 +-
  src/tests/mcalign-test.c         |   2 +-
  src/tests/memblock-test.c        |  15 +--
  src/tests/memblockq-test.c       |   2 +-
  src/tests/mix-test.c             |   2 +-
  src/tests/remix-test.c           |   2 +-
  src/tests/resampler-test.c       |   2 +-
  src/tests/srbchannel-test.c      |   2 +-
  38 files changed, 1119 insertions(+), 237 deletions(-)
  create mode 100644 src/pulsecore/mem.h
  create mode 100644 src/pulsecore/memfd-wrappers.h
  create mode 100644 src/pulsecore/privatemem.c
  create mode 100644 src/pulsecore/privatemem.h

Regards,

--
Darwish
http://darwish.chasingpointers.com


--
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic


_______________________________________________
pulseaudio-discuss mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss
