On Fri, 7 Sep 2018 18:14:05 -0700 Lloyd Pique <[email protected]> wrote:
> To allow more fd's to be buffered, remove the use of the MAX_FDS_OUT
> when deciding if there is room in fds_out. A full set of 1024 can now
> be enqueued for output.
>
> The consequence is that the flush code must be able to handle sending
> more than MAX_FDS_OUT safely. The logic has been changed so that if
> there are more than MAX_FDS_OUT of them, that maximum count is sent
> along with the data for a single message, rather than the data for all
> the messages, as fd's cannot be sent without any data. As the receiver
> will always assume it can read at least one message from the input
> buffers, one message is sent -- it just might have unused fd's.
>
> To keep things sane, an explicit limit to the number of fd's per
> message is introduced of three. An error will be generated if there
> are more.

Hi Lloyd,

you should explain in the commit message why you wrote this patch. Why
do you need more fds buffered? I have forgotten what we might have
discussed before, and other people might not have seen that to begin
with, so the justification is important to document here. Quoting a
real use case would go a long way.

Why did you choose three as the limit per message? E.g. an image in
DRM or EGL might need four buffers, which could be four fds. I don't
know of a protocol that would send more than one fd per message, but
three feels maybe a bit low... maybe not.

"To keep things sane" could be more explicit about what you mean.

"Change the way" in the title is vague and obvious; it could be more
explicit, like "allow buffering more fds before a flush".
> Signed-off-by: Lloyd Pique <[email protected]>
> ---
>  src/connection.c        |  93 +++++++++++++++++++-------
>  src/wayland-client.c    |  12 ++--
>  src/wayland-private.h   |   3 +-
>  src/wayland-server.c    |   2 +-
>  tests/connection-test.c | 141 +++++++++++++++++++++++++++++++++++-----
>  5 files changed, 205 insertions(+), 46 deletions(-)
>
> diff --git a/src/connection.c b/src/connection.c
> index c271fa0..bf77ef8 100644
> --- a/src/connection.c
> +++ b/src/connection.c
> @@ -166,6 +166,36 @@ wl_buffer_size(struct wl_buffer *b)
>  	return b->head - b->tail;
>  }
>
> +static void
> +wl_buffer_get_one_msg_iov(struct wl_buffer *b, struct iovec *iov, int *count)
> +{
> +	uint32_t p[2];
> +	int msg_size;
> +	uint32_t head, tail;
> +
> +	wl_buffer_copy(b, p, sizeof p);
> +	msg_size = p[1] >> 16;
> +
> +	head = MASK(b->head);
> +	tail = MASK(b->tail);
> +
> +	if (tail + msg_size <= sizeof b->data) {
> +		iov[0].iov_base = b->data + tail;
> +		iov[0].iov_len = msg_size;
> +		*count = 1;
> +	} else if (head == 0) {
> +		iov[0].iov_base = b->data + tail;
> +		iov[0].iov_len = sizeof b->data - tail;
> +		*count = 1;
> +	} else {
> +		iov[0].iov_base = b->data + tail;
> +		iov[0].iov_len = sizeof b->data - tail;
> +		iov[1].iov_base = b->data;
> +		iov[1].iov_len = msg_size - (sizeof b->data - tail);
> +		*count = 2;
> +	}

Rather than duplicating all that code, how about adding a size argument
to wl_buffer_get_iov()?
> +}
> +
>  struct wl_connection *
>  wl_connection_create(int fd)
>  {
> @@ -325,7 +355,11 @@ wl_connection_flush(struct wl_connection *connection, int soft)
>
>  	tail = connection->out.tail;
>  	while (connection->out.head - connection->out.tail > 0) {
> -		wl_buffer_get_iov(&connection->out, iov, &count);
> +		if (wl_buffer_size(&connection->fds_out) > MAX_FDS_OUT * sizeof(int32_t)) {
> +			wl_buffer_get_one_msg_iov(&connection->out, iov, &count);
> +		} else {
> +			wl_buffer_get_iov(&connection->out, iov, &count);
> +		}
>
>  		build_cmsg(&connection->fds_out, cmsg, &clen);
>
> @@ -400,18 +434,18 @@ wl_connection_read(struct wl_connection *connection)
>  	return wl_connection_pending_input(connection);
>  }
>
> -int
> -wl_connection_write(struct wl_connection *connection,
> -		    const void *data, size_t count)
> +static int
> +connection_buffer_write(struct wl_connection *connection,
> +			struct wl_buffer *buffer, const void *data,
> +			size_t count)
>  {
> -	if (connection->out.head - connection->out.tail +
> -	    count > ARRAY_LENGTH(connection->out.data)) {
> +	if (buffer->head - buffer->tail + count > ARRAY_LENGTH(buffer->data)) {
>  		connection->want_flush = 1;
> -		if (wl_connection_flush(connection) < 0)
> +		if (wl_connection_flush(connection, 0) < 0)
>  			return -1;
>  	}
>
> -	if (wl_buffer_put(&connection->out, data, count) < 0)
> +	if (wl_buffer_put(buffer, data, count) < 0)
>  		return -1;
>
>  	connection->want_flush = 1;
> @@ -419,6 +453,14 @@ wl_connection_write(struct wl_connection *connection,
>  	return 0;
>  }
>
> +int
> +wl_connection_write(struct wl_connection *connection,
> +		    const void *data, size_t count)
> +{
> +	return connection_buffer_write(connection, &connection->out, data,
> +				       count);
> +}
> +
>  int
>  wl_connection_queue(struct wl_connection *connection,
>  		    const void *data, size_t count)
> @@ -455,13 +497,8 @@ wl_connection_get_fd(struct wl_connection *connection)
>  static int
>  wl_connection_put_fd(struct wl_connection *connection, int32_t fd)
>  {
> -	if (wl_buffer_size(&connection->fds_out) == MAX_FDS_OUT * sizeof fd) {
> -		connection->want_flush = 1;
> -		if (wl_connection_flush(connection) < 0)
> -			return -1;
> -	}
> -
> -	return wl_buffer_put(&connection->fds_out, &fd, sizeof fd);
> +	return connection_buffer_write(connection, &connection->fds_out, &fd,
> +				       sizeof fd);
>  }
>
>  const char *
> @@ -489,9 +526,10 @@ get_next_argument(const char *signature, struct argument_details *details)
>  }
>
>  int
> -arg_count_for_signature(const char *signature)
> +arg_count_for_signature(const char *signature, int *out_fd_count)

This change could be a separate patch, since all the existing call site
changes are supposed to be trivial and just clutter the more
interesting changes.

>  {
>  	int count = 0;
> +	int fd_count = 0;
>  	for(; *signature; ++signature) {
>  		switch(*signature) {
>  		case 'i':
> @@ -501,10 +539,14 @@ arg_count_for_signature(const char *signature)
>  		case 'o':
>  		case 'n':
>  		case 'a':
> +			++count;
> +			break;
>  		case 'h':
> +			++count;
>  			++fd_count;
>  		}
>  	}
> +	if (out_fd_count) *out_fd_count = fd_count;

The assignment needs a new line.
>  	return count;
>  }
>
> @@ -583,14 +625,19 @@ wl_closure_init(const struct wl_message *message, uint32_t size,
>  		int *num_arrays, union wl_argument *args)
>  {
>  	struct wl_closure *closure;
> -	int count;
> +	int count, fd_count;
>
> -	count = arg_count_for_signature(message->signature);
> +	count = arg_count_for_signature(message->signature, &fd_count);
>  	if (count > WL_CLOSURE_MAX_ARGS) {
>  		wl_log("too many args (%d)\n", count);
>  		errno = EINVAL;
>  		return NULL;
>  	}
> +	if (fd_count > WL_CLOSURE_MAX_FD_ARGS) {
> +		wl_log("too many fd args (%d)\n", fd_count);
> +		errno = EINVAL;
> +		return NULL;
> +	}
>
>  	if (size) {
>  		*num_arrays = wl_message_count_arrays(message);
> diff --git a/src/wayland-private.h b/src/wayland-private.h
> index ba183fc..9c8f3c7 100644
> --- a/src/wayland-private.h
> +++ b/src/wayland-private.h
> @@ -50,6 +50,7 @@
>  #define WL_MAP_CLIENT_SIDE 1
>  #define WL_SERVER_ID_START 0xff000000
>  #define WL_CLOSURE_MAX_ARGS 20
> +#define WL_CLOSURE_MAX_FD_ARGS 3

Doesn't the value of WL_CLOSURE_MAX_FD_ARGS have some delicate
dependencies on the wl_buffer size and the message size? One can only
send MAX_FDS_OUT at most in a single sendmsg(), and the minimum amount
of data that must be sent in a call is the size of the message that
just happens to be next in the buffer. Therefore, WL_CLOSURE_MAX_FD_ARGS
must be smaller than or equal to the wl_buffer size divided by the
message size. But message size has no real limit; you could have a
message of 3968 bytes, for instance.

The minimum message size for carrying WL_CLOSURE_MAX_FD_ARGS fds is an
8-byte header plus 4 bytes per fd. For a maximum of 3 fds that makes
20 bytes.

Let's say we have this in the wl_buffer:
- a message with no fds, 3968 bytes
- six messages with 3 fds each, each 20 bytes
- 8 bytes of free space

The client tries to queue another message that is more than 8 bytes,
triggering a forced internal flush. We can send 28 fds per message, so
the 18 fds are no problem with these numbers.
Let's write out the equations assuming the above scenario is the worst
case, and figure out the limits for WL_CLOSURE_MAX_FD_ARGS.

All messages need to fit into the wl_buffer:

	sizeof(wl_buffer) >= msg1 + n * (8 + 4 * WL_CLOSURE_MAX_FD_ARGS)

msg1 is the size of the first message without any fds, and n is the
number of fd-carrying messages, each carrying the maximum number of
fds with minimum data.

We need to have enough messages to be able to send all the fds:

	(n + 1) * MAX_FDS_OUT >= n * WL_CLOSURE_MAX_FD_ARGS

The +1 is the first message without any fds. Rearranged:

	(n + 1)/n * MAX_FDS_OUT >= WL_CLOSURE_MAX_FD_ARGS

Since (n + 1)/n > 1, this will be sufficient as well:

	MAX_FDS_OUT >= WL_CLOSURE_MAX_FD_ARGS

Ok, that makes sense. Actually it's trivial: if we need to send at
least a whole message at a time, then no message can have more fds
than we can send at a time. D'oh. That also makes sure that fds are
never sent after the message they belong to.

Why three, then? Why not MIN(WL_CLOSURE_MAX_ARGS, MAX_FDS_OUT)? Or
just use WL_CLOSURE_MAX_ARGS instead and document the relationship by
adding a static assert for WL_CLOSURE_MAX_ARGS <= MAX_FDS_OUT. Then
you don't even need to count fd args specifically anymore.

> diff --git a/tests/connection-test.c b/tests/connection-test.c
> index 4248f4a..4e5fc38 100644
> --- a/tests/connection-test.c
> +++ b/tests/connection-test.c

I will look at the tests after we have agreed again that this change
really is necessary. I'm not too convinced at the moment.

This change could cause regressions, too. Clients that used to operate
near the RLIMIT_NOFILE limit may now buffer more open fds at a time,
causing them to exceed the limit and fail. I don't think that is too
serious, because they were on the edge and lucky to begin with, but it
is worth considering.

Thanks,
pq
_______________________________________________
wayland-devel mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
