https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89223

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org

--- Comment #5 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
The gimplifier change would be something like:
--- gimplify.c.jj       2019-01-28 23:30:53.199762928 +0100
+++ gimplify.c  2019-02-06 17:15:35.368976785 +0100
@@ -2977,6 +2977,10 @@ gimplify_compound_lval (tree *expr_p, gi

       if (TREE_CODE (t) == ARRAY_REF || TREE_CODE (t) == ARRAY_RANGE_REF)
        {
+         if (!error_operand_p (TREE_OPERAND (t, 1))
+             && (TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (t, 1)))
+                 > TYPE_PRECISION (sizetype)))
+           TREE_OPERAND (t, 1) = fold_convert (sizetype, TREE_OPERAND (t, 1));
          /* Gimplify the dimension.  */
          if (!is_gimple_min_invariant (TREE_OPERAND (t, 1)))
            {
and besides the fears about weirdo targets, I think it is the right thing even
on 32-bit targets when long long indexes into arrays are used (a minimal
example is sketched below).  After all, if we gimplify it into pointer
arithmetic, we'd cast to sizetype anyway.
Richard, thoughts on this?
Or, if we fear too much, we could do that only when pointers have the same
precision as sizetype or something similar; I think weirdo targets will not
have support for > 64-bit integral types anyway.
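
A minimal sketch of the kind of code in question (a hypothetical test case,
not taken from the report): on an ILP32 target, sizetype has 32-bit precision
while the index below has 64-bit precision, so the hunk above would narrow the
ARRAY_REF index to sizetype before gimplifying the dimension.

/* Hypothetical example: index type wider than sizetype on ILP32.  */
extern int a[1024];

int
f (long long i)
{
  return a[i];  /* the proposed hunk would fold 'i' to sizetype here */
}

With the hunk applied, the GIMPLE dump (-fdump-tree-gimple, compiled with -m32)
should show the index already converted to sizetype, which matches what the
later lowering to pointer arithmetic would do anyway.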
