On 03/06/2017 12:06 PM, P J P wrote:
> +-- On Mon, 6 Mar 2017, Eric Blake wrote --+
> | On 03/06/2017 01:17 AM, P J P wrote:
> | > Arguments passed to the execve(2) call from a user program could
> | > be large; allocating stack memory for them via alloca(3) could
> | > lead to bad behaviour. Use 'g_malloc0' to allocate memory
> | > for such arguments.
> | > 
> | > Signed-off-by: Prasad J Pandit <p...@fedoraproject.org>
> | > ---
> | >  linux-user/syscall.c | 7 +++++--
> | >  1 file changed, 5 insertions(+), 2 deletions(-)
> | 
> | Is this patch alone (without 1/2) sufficient to solve the problem?  If
> | so, then drop 1/2.
> 
>   Yes, it seems to fix the issue. Still, I think having an ARG_MAX limit
> would be good, as the system exec(3) routines also impose the _SC_ARG_MAX
> limit. I'll send a revised patch with a 'g_try_new' call instead of
> g_malloc0.
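
As an aside on 'g_try_new': presumably you mean g_try_new0(), to keep
the zero-fill that g_malloc0 gave you.  Unlike g_malloc0, it returns
NULL on failure, so both allocations need to be checked and an error
handed back to the guest.  An untested sketch, with the error path
simplified (the real code in linux-user/syscall.c may prefer a goto
label):

    argp = g_try_new0(char *, argc + 1);
    envp = g_try_new0(char *, envc + 1);
    if (!argp || !envp) {
        /* g_free(NULL) is a no-op, so no need to track which one failed */
        g_free(argp);
        g_free(envp);
        ret = -TARGET_ENOMEM;
        break;
    }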

If you impose any limit smaller than _SC_ARG_MAX, you are needlessly
limiting things.  Furthermore, _SC_ARG_MAX may not always be the same
value, depending on how the kernel was compiled.  So it's probably
easiest to just let execve() impose its own limits (and correctly report
errors to the caller when execve() fails), rather than trying to impose
limits yourself.
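
For the error reporting side, the existing pattern in
linux-user/syscall.c already does the right thing once the vectors are
built; roughly (from memory, so treat the exact names as approximate):

    /* Let the host kernel apply its own argument-size limit; if it
     * rejects the vectors (e.g. with E2BIG), that errno is translated
     * and handed back to the guest rather than second-guessed here.
     */
    ret = get_errno(execve(p, argp, envp));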

In short, the bug that you are fixing is not caused by the guest
requesting something beyond execve() limits, but caused by poor use of
alloca() leading to a stack overrun.  You only need to fix the bug (by
switching from alloca() to heap allocation), not introduce additional ones
(by imposing arbitrary, and probably wrong, limits).
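
Concretely, the problematic pattern and the minimal fix look roughly
like this (untested sketch, with the surrounding code abbreviated):

    /* before: argc/envc are fully guest-controlled, so a huge vector
     * turns this stack allocation into a stack overrun
     */
    argp = alloca((argc + 1) * sizeof(void *));
    envp = alloca((envc + 1) * sizeof(void *));

    /* after: the same zero-filled arrays, just on the heap */
    argp = g_new0(char *, argc + 1);
    envp = g_new0(char *, envc + 1);

    /* ... fill argp/envp from guest memory, call execve() ... */

    g_free(argp);
    g_free(envp);

The only extra work compared to alloca() is the pair of g_free() calls
on the paths where execve() actually returns (that is, on failure).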

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
