On 01/05/2017 02:53 PM, Martin Sebor wrote:
When the size passed to a call to a function like memcpy is a signed
integer whose range has a negative lower bound and a positive upper
bound, the lower bound of the range of the argument after conversion
to size_t may be in excess of the maximum object size (PTRDIFF_MAX
by default).  This results in -Wstringop-overflow false positives.
Is this really a false positive though?  ISTM that if the test case were compiled for a 32-bit target, then all hell would break loose if g::n was 0xffffffff (unsigned 32-bit).
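
For concreteness, here is a minimal sketch of the kind of call involved
(the names, buffer size, and range bounds are invented for illustration,
not taken from the test case in the bug):

  #include <string.h>

  char buf[8];

  void f (char *src, int n)
  {
    if (n < -1 || n > 8)
      return;

    /* n has the VR_RANGE [-1, 8] here; after conversion to size_t
       the lower bound -1 becomes SIZE_MAX, in excess of PTRDIFF_MAX,
       so -Wstringop-overflow may trigger.  */
    memcpy (buf, src, n);
  }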

The attached patch detects this case and avoids the problem.

Btw., I had a heck of a time creating a test case for this.  The
large translation unit submitted with the bug is from a file in
the Linux kernel of some complexity.  There, the range of the int
variable (before conversion to size_t) is [INT_MIN, INT_MAX].  It
seems very difficult to create a VR_RANGE for a signed int that
matches it.  The test case I came up with that still reproduces
the false positive creates an anti-range for the signed int argument
by converting an unsigned int in one range to a signed int and
constraining it to another range.  The false positive is avoided
because the code doesn't (yet) handle anti-ranges.
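
A hypothetical sketch of that construction (the bounds are invented;
the real test case may use different ranges):

  #include <string.h>

  char d[32];

  void g (char *src, unsigned u)
  {
    if (u >= 5)
      {
        /* Values of u above INT_MAX wrap to negatives here, so the
           signed n may be recorded as the anti-range ~[0, 4] rather
           than a VR_RANGE.  */
        int n = u;
        if (n < 32)
          memcpy (d, src, n);
      }
  }
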
I'd think that to create [INT_MIN, INT_MAX] you'd probably need a meet at a PHI node that isn't trivially representable and thus gets dropped to [INT_MIN, INT_MAX].  A meet of 3 values with 2 holes, for example, might do what you wanted.
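
Something along those lines, perhaps (hypothetical; whether the meet
actually gets dropped rather than widened to a single range is what
would need checking):

  #include <string.h>

  char d[64];

  void h (char *src, int sel)
  {
    int n;
    if (sel == 0)
      n = -10;
    else if (sel == 1)
      n = 3;
    else
      n = 40;
    /* The PHI for n meets the singletons -10, 3 and 40: a union with
       two holes, which a single range or anti-range cannot represent,
       so it may get dropped to [INT_MIN, INT_MAX].  */
    memcpy (d, src, n);
  }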

Martin

PS This seems like a bug or gotcha in the get_range_info() function.
In the regression test added by the patch, the VRP dump shows the
following:
Note that the ranges in VRP can be more precise than the ranges seen outside VRP. The warning is being emitted at the gimple->rtl phase, so you may be stumbling over one of the numerous problems with losing good range information.

Jeff
