http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56924



             Bug #: 56924

           Summary: Folding of checks into a range check should check

                    upper boundary

    Classification: Unclassified

           Product: gcc

           Version: 4.9.0

            Status: UNCONFIRMED

          Severity: normal

          Priority: P3

         Component: c

        AssignedTo: unassig...@gcc.gnu.org

        ReportedBy: josh.m.con...@gmail.com





When we are folding equality checks into a range check and the values are at the
top end of the range, we should just test against the lower bound with a >=
comparison instead of normalizing the values to the bottom of the range and
using a <= comparison.



For example, consider:



  struct stype {

    unsigned int pad:4;

    unsigned int val:4;

  };



  void bar (void);



  void foo (struct stype input)

  {

    if ((input.val == 0xe) || (input.val == 0xf))

      bar();

  }





When compiled at -O2, the original tree generated is:





  ;; Function foo (null)

  ;; enabled by -tree-original





  {

    if (input.val + 2 <= 1)

      {

        bar ();

      }

  }



This is likely to be more efficient if we instead generate:



    if (input.val >= 0xe)

      {

        bar ();

      }



The inefficiency can be seen in the code generated for an ARM Cortex-A15:



        ubfx    r0, r0, #4, #4

        add     r3, r0, #2

        and     r3, r3, #15

        cmp     r3, #1



(The add and the and instructions are not necessary if we change the test condition.)
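
As a sanity check, a small exhaustive test over the 16 possible values of the
4-bit field confirms that the original condition, the currently folded form
(with 4-bit wraparound), and the proposed >= form all agree. This is just an
illustrative program, not part of the testcase:

  #include <assert.h>

  int
  main (void)
  {
    unsigned v;
    for (v = 0; v <= 0xf; v++)
      {
        int orig     = (v == 0xe) || (v == 0xf);
        int folded   = ((v + 2) & 0xf) <= 1;   /* current folded form, 4-bit wraparound */
        int proposed = v >= 0xe;               /* suggested form */
        assert (orig == folded && orig == proposed);
      }
    return 0;
  }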



I was able to improve this by adding detection of this case into

build_range_check.
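
For reference, a rough sketch of the kind of decision involved, written with
plain unsigned integers rather than GCC trees (the helper name and structure
below are hypothetical and are not the actual build_range_check code):

  #include <stdio.h>

  /* Print the source-level form of a check for value in [low, high] on an
     unsigned field whose maximum representable value is type_max.  */
  static void
  describe_range_check (unsigned low, unsigned high, unsigned type_max)
  {
    if (high == type_max)
      /* The range ends at the top of the type, so a single lower-bound
         comparison is enough.  */
      printf ("val >= 0x%x\n", low);
    else
      /* General case: shift the range down to zero and compare against
         its width with an unsigned comparison.  */
      printf ("val - 0x%x <= 0x%x  (unsigned)\n", low, high - low);
  }

  int
  main (void)
  {
    describe_range_check (0xe, 0xf, 0xf);  /* prints: val >= 0xe */
    describe_range_check (0x3, 0x5, 0xf);  /* general case */
    return 0;
  }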
