Hi!

On the testcase in the PR we ICE because the unroller creates non-canonical
CONST_INTs with GEN_INT (e.g. 0xfffe for a HImode niter, which canonically
must be sign-extended to -2), and compare_and_jump_seq then ICEs on them.

Fixed thusly, bootstrapped/regtested on x86_64-linux and i686-linux, ok for
trunk?

Testcase not included, because it creates 480000 basic blocks and we handle
that with O(n^2) complexity.

2019-03-19  Jakub Jelinek  <ja...@redhat.com>

        PR rtl-optimization/89768
        * loop-unroll.c (unroll_loop_constant_iterations): Use gen_int_mode
        instead of GEN_INT.
        (unroll_loop_runtime_iterations): Likewise.

--- gcc/loop-unroll.c.jj        2019-03-19 09:09:32.686006683 +0100
+++ gcc/loop-unroll.c   2019-03-19 10:15:50.319343904 +0100
@@ -652,7 +652,7 @@ unroll_loop_constant_iterations (struct
   if (loop->any_likely_upper_bound)
     loop->nb_iterations_likely_upper_bound
      = wi::udiv_trunc (loop->nb_iterations_likely_upper_bound, max_unroll + 1);
-  desc->niter_expr = GEN_INT (desc->niter);
+  desc->niter_expr = gen_int_mode (desc->niter, desc->mode);
 
   /* Remove the edges.  */
   FOR_EACH_VEC_ELT (remove_edges, i, e)
@@ -1020,9 +1020,9 @@ unroll_loop_runtime_iterations (struct l
       preheader = split_edge (loop_preheader_edge (loop));
       /* Add in count of edge from switch block.  */
       preheader->count += iter_count;
-      branch_code = compare_and_jump_seq (copy_rtx (niter), GEN_INT (j), EQ,
-                                         block_label (preheader), p,
-                                         NULL);
+      branch_code = compare_and_jump_seq (copy_rtx (niter),
+                                         gen_int_mode (j, desc->mode), EQ,
+                                         block_label (preheader), p, NULL);
 
       /* We rely on the fact that the compare and jump cannot be optimized out,
         and hence the cfg we create is correct.  */

        Jakub