On 2016-02-19 20:28, Ahmed S. Darwish wrote:
On Fri, Feb 19, 2016 at 03:54:06PM +0100, David Henningsson wrote:
On 2016-02-16 22:41, Ahmed S. Darwish wrote:
Hi :-)
...
4. The code above is actually for the PA endpoint creating a
memfd mempool and registering it with the other side. That
is, pa_mempool_take_memfd_fd() is just used when registering
a memfd mempool over the pstream.
Ah, so that's only used on the sending side. Maybe I was just
confused when I wrote the above.
But then I have a follow-up question. In shm_attach, which is on the
receiving side, why do we need to save the fd in pa_shm at all? I
mean, we don't need to send it to anyone else (right?), so all we
need is the memory and mmap keeps that open for us. So, we could
just close the memfd in shm_attach?
Yup, that's exactly what happens. No fd is ever cached on the
receiving side at shm_attach(). A similar point was raised in the
patch #5 review, and a more detailed reply was posted here [*] -- in
the second reply hunk.
Given that this issue has now been raised twice, the code is clearly
too unclear, possibly due to the per-type stuff that has since been
removed. Anyway, here's how shm_attach() looks in v3:
static int shm_attach(pa_shm *m, pa_mem_type_t type, unsigned id,
                      int memfd_fd, ..) {
    ..

    /* In case of attaching to memfd areas, _the caller_
     * maintains ownership of the passed fd and has the sole
     * responsibility of closing it down. For other types, we
     * are the code path which created the fd in the first
     * place and we're thus the ones responsible for closing it
     * down */
    if (type != PA_MEM_TYPE_SHARED_MEMFD)
        pa_assert_se(pa_close(fd) == 0);

    m->type = type;
    m->id = id;
    m->size = (size_t) st.st_size;
    m->do_unlink = false;
    m->fd = -1;
}
I guess this is now much more clear :-)
Okay, so it's mostly a design decision then? Either take ownership of
the memfd and close it ourselves when we're done (with simpler code as
a result?), or let the caller close it right after the call to
shm_attach?
--
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic
_______________________________________________
pulseaudio-discuss mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss