On 09/14/2018 12:17 PM, David Malcolm wrote:
gcc/ChangeLog:
        * pretty-print.c (get_power_of_two): New function.
        (struct relative_to_power_of_2): New struct.
        (pp_humanized_limit): New function.
        (pp_humanized_range): New function.
        (selftest::assert_pp_humanized_limit): New function.
        (ASSERT_PP_HUMANIZED_LIMIT): New macro.
        (selftest::assert_pp_humanized_range): New function.
        (ASSERT_PP_HUMANIZED_RANGE): New macro.
        (selftest::test_get_power_of_two): New function.
        (selftest::test_humanized_printing): New function.
        (selftest::pretty_print_c_tests): Call them.
        * pretty-print.h (pp_humanized_limit): New function.
        (pp_humanized_range): New function.
...
+static void
+test_humanized_printing ()
+{
+  ASSERT_PP_HUMANIZED_LIMIT ((1<<16) + 1, "(1<<16)+1");
+  ASSERT_PP_HUMANIZED_LIMIT (100000, "100000");
+  ASSERT_PP_HUMANIZED_LIMIT (1<<17, "1<<17");
+  ASSERT_PP_HUMANIZED_LIMIT ((1<<17) - 1, "(1<<17)-1");
+  ASSERT_PP_HUMANIZED_LIMIT ((1<<17) + 1, "(1<<17)+1");
+  ASSERT_PP_HUMANIZED_LIMIT (4294967295, "(1<<32)-1");
+
+  ASSERT_PP_HUMANIZED_RANGE (0, 0, "0");
+  ASSERT_PP_HUMANIZED_RANGE (0, 1, "0...1");
+  ASSERT_PP_HUMANIZED_RANGE ((1<<16), (1<<17), "65536...1<<17");

I didn't comment on this aspect of the change yesterday.  Besides
making very large numbers easier for humans to understand, my hope
was also to help avoid data model dependencies in the test suite.

When testing on any given host it's easy to forget that a constant
may have a different value under another data model and to hardcode
the host's value.  The test then fails on targets where the constant
differs.
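
For a made-up example (the directive below isn't taken from the
testsuite): something like

/* { dg-warning "exceeds maximum object size 2147483647" } */

would pass on ILP32 targets, where PTRDIFF_MAX is 2147483647, but
fail on LP64 targets, where the same limit is 9223372036854775807.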

As it is, the sprintf (and other) tests deal with the problem like
this:

T (-1, "%*cX", INT_MAX, '1'); /* { dg-warning "output of \[0-9\]+ bytes causes result to exceed .INT_MAX." } */

I.e., by accepting any sequence of digits where a large constant
like INT_MAX is expected.  This works fine but doesn't verify that
the value printed is correct.

One approach to dealing with this is to use manifest constants
in the diagnostic output that are independent of the data model,
like INT_MAX, SIZE_MAX, etc.
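
Purely as an illustration (this sketch isn't from the patch;
print_humanized_limit and its behavior are made up), the idea would
be something along these lines:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: print VALUE as the name of a well-known
   <limits.h>/<stdint.h> constant when it matches one, and as a plain
   number otherwise.  The names stay the same across data models even
   though the underlying values differ; when two limits coincide,
   the first match wins.  */

static void
print_humanized_limit (FILE *f, unsigned long long value)
{
  if (value == (unsigned long long) INT_MAX)
    fputs ("INT_MAX", f);
  else if (value == (unsigned long long) UINT_MAX)
    fputs ("UINT_MAX", f);
  else if (value == (unsigned long long) LONG_MAX)
    fputs ("LONG_MAX", f);
  else if (value == (unsigned long long) SIZE_MAX)
    fputs ("SIZE_MAX", f);
  else
    fprintf (f, "%llu", value);
}

int
main (void)
{
  /* On LP64 this prints INT_MAX, SIZE_MAX and 100000; on ILP32 the
     second line prints UINT_MAX instead (SIZE_MAX and UINT_MAX
     coincide there), but either way a named limit is printed rather
     than a data-model-dependent number.  */
  print_humanized_limit (stdout, (unsigned long long) INT_MAX);
  putc ('\n', stdout);
  print_humanized_limit (stdout, (unsigned long long) SIZE_MAX);
  putc ('\n', stdout);
  print_humanized_limit (stdout, 100000);
  putc ('\n', stdout);
  return 0;
}

Where the printed value corresponds to one of these limits, a
directive could then match the name exactly instead of falling back
on \[0-9\]+.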

Using shift expressions instead doesn't solve this problem,
but even beyond that, I wouldn't consider shift expressions
like "(1 << 16) + 1" to be significantly more readable than
their values.  IMO, it overestimates the ability of most
programmers to do shift arithmetic in their heads ;-)
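
To make that concrete: working out that (1<<17)-1 is 131071, or
that (1<<32)-1 is 4294967295, takes more effort than just reading
the numbers, whereas a name like UINT_MAX takes none.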

Do you see a problem with using the <limits.h> constants?

Martin
