On Thu, Feb 21, 2019 at 7:30 AM Vasily Averin <v...@virtuozzo.com> wrote:
>
> There were a few incidents where XFS over a network block device generated
> IO requests with slab-based metadata. If these requests are processed
> via the sendpage path, tcp_sendpage() calls skb_can_coalesce() and merges
> neighbouring slab objects into one skb fragment.
>
> If the receiving side is located on the same host, tcp_recvmsg() can trigger
> the following BUG_ON:
> usercopy: kernel memory exposure attempt detected
>                 from XXXXXX (kmalloc-512) (1024 bytes)
>
> This patch helps to detect the cause of similar incidents on the sending side.
>
> Signed-off-by: Vasily Averin <v...@virtuozzo.com>
> ---
>  net/ipv4/tcp.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 2079145a3b7c..cf9572f4fc0f 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -996,6 +996,7 @@ ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
>                         goto wait_for_memory;
>
>                 if (can_coalesce) {
> +                       WARN_ON_ONCE(PageSlab(page));

Please use VM_WARN_ON_ONCE() to make this a nop for CONFIG_DEBUG_VM=n.

Also the whole tcp_sendpage() should be protected, not only the coalescing part.

(The get_page() done a few lines later should not be attempted either.)

>                         skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
>                 } else {
>                         get_page(page);
> --
> 2.17.1
>

It seems the bug has nothing to do with TCP, and belongs to the caller.

Otherwise you would need to add the check to all existing .sendpage() /
.sendpage_locked() handlers out there.
