On Tue, Jan 08, 2019 at 02:56:44PM +0100, Martin Liška wrote:
> --- a/gcc/tree-switch-conversion.c
> +++ b/gcc/tree-switch-conversion.c
> @@ -100,6 +100,7 @@ switch_conversion::collect (gswitch *swtch)
>    max_case = gimple_switch_label (swtch, branch_num - 1);
> 
>    m_range_min = CASE_LOW (min_case);
> +  gcc_assert (operand_equal_p (TYPE_SIZE (TREE_TYPE (m_range_min)),
> 	       TYPE_SIZE (TREE_TYPE (m_index_expr)), 0));
>    if (CASE_HIGH (max_case) != NULL_TREE)
>      m_range_max = CASE_HIGH (max_case);
>    else
> 
> and I haven't triggered the assert.
> > 
> > With using just the constructor elt type, do you count on the analysis to
> > fail if starting with casting the index to the elt type (or unsigned variant
> > thereof) affects the computation?
> 
> So hopefully the situation can't happen.  Note that if it happens we should not
> generate wrong-code, but we miss an opportunity.
The situation can happen very easily, just use

int
foo (long long x)
{
  int ret;
  switch (x)
    {
    case 1234567LL:
      ret = 123;
      break;
    ...
    }
}

What I was wondering is whether doing the computation in the wider (index)
type and then casting to the narrower (ctor value) type could ever optimize
something that doing it in the narrower type can't.  Say, if the index type
is unsigned int and the elt0 type is unsigned char, (a * i + b) % 256 could
be the ctor sequence, but one couldn't find c, d in [0, 255] such that
(c * (i % 256) + d) % 256 == (a * i + b) % 256.  But don't c = a % 256 and
d = b % 256 satisfy that?

	Jakub
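
[Editorial aside, not part of the thread: a brute-force sketch of that last
congruence, reusing the a, b, c, d and i names from the paragraph above, could
look like the following.]

/* Illustrative sketch only: check that c = a % 256 and d = b % 256
   reproduce (a * i + b) % 256 once i is also reduced modulo 256.
   Unsigned arithmetic wraps modulo a power of two, which is a
   multiple of 256, so reducing the operands first cannot change
   the residue of the result.  */
#include <assert.h>

int
main (void)
{
  unsigned int a = 123456789u, b = 987654321u;
  unsigned int c = a % 256, d = b % 256;
  for (unsigned int i = 0; i < 1000000; i++)
    assert ((a * i + b) % 256 == (c * (i % 256) + d) % 256);
  return 0;
}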