On 01/06/2017 01:55 PM, Jeff Law wrote:
> On 01/05/2017 02:53 PM, Martin Sebor wrote:
>> When the size passed to a call to a function like memcpy is a signed
>> integer whose range has a negative lower bound and a positive upper
>> bound, the lower bound of the range of the argument after conversion
>> to size_t may be in excess of the maximum object size (PTRDIFF_MAX
>> by default).  This results in -Wstringop-overflow false positives.
> Is this really a false positive though?  ISTM that if the testcase
> were compiled for a 32 bit target, then all hell would break loose if
> g::n was 0xffffffff (unsigned 32bit).

You're right!  In the test case I added, the warning is indeed correct.
And after spending more time going through the submitted translation
unit I think it's correct there too, because of the unsigned to signed
(to unsigned) conversion.  I think I was initially looking at the ranges
before the function got inlined into the caller where the problem
occurs.  I may have also misread the VRP dump (or looked at the wrong
one).  It also doesn't help that the warning doesn't show the
inlining stack.  Let me fix that.

> I'd think to create [INT_MIN, INT_MAX] you'd probably need a meet at a
> PHI node that wasn't trivially representable and thus would get dropped
> to [INT_MIN, INT_MAX].  A meet of 3 values with 2 holes for example
> might do what you wanted.

It would be nice to have a helper in the test suite for creating
ranges.  (Or perhaps a built-in.)

> Note that the ranges in VRP can be more precise than the ranges seen
> outside VRP.  The warning is being emitted at the gimple->rtl phase, so
> you may be stumbling over one of the numerous problems with losing good
> range information.

I think I had simply misread the ranges.

Thanks for the careful review!

Martin
