https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79327

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-02-01
                 CC|                            |jakub at gcc dot gnu.org,
                   |                            |msebor at gcc dot gnu.org
   Target Milestone|---                         |7.0
            Summary|wrong code at -O2 and       |[7 Regression] wrong code
                   |-fprintf-return-value       |at -O2 and
                   |                            |-fprintf-return-value
     Ever confirmed|0                           |1

--- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
This is clearly a bug in the code I've been asking to have fixed earlier:
http://gcc.gnu.org/ml/gcc-patches/2016-12/msg00385.html
In particular, the code isn't structured to separate three steps: first
computing the possible range of values the argument can have, then adjusting
that range for a possible implicit conversion (say, from signed to unsigned or
vice versa), and finally figuring out from that range of values which values
yield the minimum and maximum number of characters (those are not necessarily
the range boundaries).
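
To illustrate that separation, a minimal, self-contained sketch (hypothetical
helper names, plain "%d" semantics only; the real code handles many more
directives and types):

  #include <stdio.h>

  /* Number of characters a plain "%d" directive produces for X
     (digits plus a '-' for negative values).  */
  static int
  repr_length (long long x)
  {
    return snprintf (NULL, 0, "%lld", x);
  }

  /* Final step: from the (already conversion-adjusted) value range
     [MIN, MAX], find the shortest and longest possible output.  The
     shortest comes from the value closest to zero, which is a range
     boundary only when the range does not straddle zero.  */
  static void
  lengths_from_range (long long min, long long max,
                      int *shortest, int *longest)
  {
    long long nearest = min > 0 ? min : max < 0 ? max : 0;
    int lmin = repr_length (min), lmax = repr_length (max);
    *shortest = repr_length (nearest);
    *longest = lmin > lmax ? lmin : lmax;
  }

  int
  main (void)
  {
    int lo, hi;
    /* The first step would compute this range from VRP; the second
       would adjust it for implicit conversions (omitted here).  */
    lengths_from_range (-9, 100, &lo, &hi);
    printf ("[%d, %d]\n", lo, hi);  /* [1, 3]: "0" beats "-9" and "100" */
    return 0;
  }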

Right now, the code mixes that up:
1) for an arg which is an SSA_NAME with a VR_RANGE, argmin is set to the
minimum and argmax to the maximum (well,
          argmin = build_int_cst (argtype, wi::fits_uhwi_p (min)
                                  ? min.to_uhwi () : min.to_shwi ());
          argmax = build_int_cst (argtype, wi::fits_uhwi_p (max)
                                  ? max.to_uhwi () : max.to_shwi ());
is also questionable code: it can throw away various bits from those values, so
why doesn't it use wide_int_to_tree?); at this point argmin and argmax are the
range boundaries
2) otherwise, it computes some values based on the sign/precision of the type
etc.; these values are not range boundaries: argmin stands for the value with
the (hopefully) shortest string representation and argmax for the value with
the longest
3) then it swaps them if argmin is bigger than argmax
4) then adjust_range_for_overflow is applied
5) then it picks those argmin and argmax values and, for an unsigned type, uses
their representation lengths as the minimum and maximum; for a signed type,
vice versa (condensed into a sketch right after this list)
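
A minimal, self-contained rendering of that pattern (hypothetical names; the
signed/unsigned vice-versa of step 5 is reduced here to picking the smaller
and the larger boundary length):

  #include <stdio.h>

  static int
  repr_length (long long x)
  {
    return snprintf (NULL, 0, "%lld", x);
  }

  /* Step 5 in essence: derive the length range from the two boundary
     values only.  That is fine when argmin/argmax came from step 2),
     i.e. are the values with the extreme representations, but wrong
     when they are raw range boundaries from step 1) and the range
     straddles zero.  */
  static void
  boundary_lengths (long long argmin, long long argmax, int *lo, int *hi)
  {
    if (argmin > argmax)                      /* step 3 */
      {
        long long tmp = argmin;
        argmin = argmax;
        argmax = tmp;
      }
    int l1 = repr_length (argmin), l2 = repr_length (argmax);
    *lo = l1 < l2 ? l1 : l2;
    *hi = l1 < l2 ? l2 : l1;
  }

  int
  main (void)
  {
    int lo, hi;
    boundary_lengths (-9, 100, &lo, &hi);
    printf ("[%d, %d]\n", lo, hi);  /* [2, 3], but 0 yields length 1 */
    return 0;
  }

Contrast the [2, 3] this prints with the [1, 3] the earlier sketch derives for
the same range: a boundary-only view can never account for the interior
value 0.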

So, what happens for this testcase is that we have a VR_RANGE for each
argument: the first value is in [-35791394, 35791394] and the second in
[0, 2147483647].
So 1) applies, 2)/3)/4) are skipped, and we figure out that the length range
for the first argument is [9, 9] (when it actually is [3, 9]) and for the
second one [2, 10] (correct).  So the result is [11, 19] when it should have
been [5, 19].  Obviously 0 has a shorter representation than both -35791394
and 35791394.
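
A quick check of that last point with plain "%d" lengths (the testcase's
actual directives evidently involve a precision or flags, which is why the
correct minimum above is 3 rather than 1):

  #include <stdio.h>

  int
  main (void)
  {
    /* Boundaries and interior point of the first argument's range.  */
    printf ("%d %d %d\n",
            snprintf (NULL, 0, "%d", -35791394),  /* 9 */
            snprintf (NULL, 0, "%d", 35791394),   /* 8 */
            snprintf (NULL, 0, "%d", 0));         /* 1 */
    return 0;
  }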
