https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109303
--- Comment #5 from Martin Jambor <jamborm at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #4)
> --- gcc/ipa-cp.cc.jj 2023-03-14 19:12:19.949553036 +0100
> +++ gcc/ipa-cp.cc 2023-03-29 18:32:34.148888423 +0200
> @@ -3117,7 +3117,9 @@ propagate_aggs_across_jump_function (str
>         {
>           HOST_WIDE_INT val_size;
>
> -         if (item->offset < 0 || item->jftype == IPA_JF_UNKNOWN)
> +         if (item->offset < 0
> +             || item->jftype == IPA_JF_UNKNOWN
> +             || item->offset >= (HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT)
>            continue;
>          val_size = tree_to_shwi (TYPE_SIZE (item->type));
>
> fixes the ICE and is similar to the PR108605 fix.  I don't know whether
> the offset + size computations can also overflow, or whether something
> protects against that.
> Anyway, I think it would be worth switching all those unsigned int byte
> offsets to unsigned HOST_WIDE_INTs for GCC 14.
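To make the narrowing concrete: these aggregate offsets are kept as
unsigned int byte counts in places, so a bit offset at or above
(HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT cannot survive the conversion,
which is what the guard above rejects.  A minimal standalone sketch,
not GCC code; HOST_WIDE_INT and BITS_PER_UNIT are stand-ins assuming a
64-bit HWI and 8-bit units:

/* Standalone illustration, not GCC code: stand-in definitions only.  */
#include <climits>
#include <cstdio>

#define BITS_PER_UNIT 8                 /* assumed 8-bit units */
typedef long long HOST_WIDE_INT;        /* stand-in for GCC's 64-bit HWI */

int
main (void)
{
  /* A bit offset one byte past what an unsigned int byte offset holds.  */
  HOST_WIDE_INT bit_offset = (HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT + 8;

  /* Narrowing to unsigned int wraps modulo 2^32: 4294967296 becomes 0.  */
  unsigned int byte_offset = bit_offset / BITS_PER_UNIT;

  /* The guard in the patch skips such items before any narrowing.  */
  if (bit_offset >= (HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT)
    printf ("guard triggers: bit offset %lld does not fit\n", bit_offset);
  printf ("narrowed byte offset: %u\n", byte_offset);
  return 0;
}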
Actually, I am in the process of doing the reverse, in order to try to
keep the memory footprint of the structures small.  (The reason the
HOST_WIDE_INTs are signed is what get_ref_base_and_extent used to
return.)  Unfortunately, what I wanted to do but forgot is the
following (only lightly tested so far; it has the benefit that
uselessly large offsets are not even streamed):
diff --git a/gcc/ipa-prop.cc b/gcc/ipa-prop.cc
index de45dbccf16..edc1f469914 100644
--- a/gcc/ipa-prop.cc
+++ b/gcc/ipa-prop.cc
@@ -1735,6 +1735,8 @@ build_agg_jump_func_from_list (struct ipa_known_agg_contents_list *list,
       item.offset = list->offset - arg_offset;
       gcc_assert ((item.offset % BITS_PER_UNIT) == 0);
+      if (item.offset + list->size >= (HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT)
+        continue;
       jfunc->agg.items->quick_push (item);
     }
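
Read as a predicate, the new condition keeps an item only when its
whole bit range still fits below (HOST_WIDE_INT) UINT_MAX *
BITS_PER_UNIT.  A sketch under the same stand-in definitions as above;
agg_item_fits_p is a hypothetical name, not a function in ipa-prop.cc:

/* Sketch only: hypothetical helper mirroring the patch's condition.  */
#include <climits>

#define BITS_PER_UNIT 8                 /* assumed 8-bit units */
typedef long long HOST_WIDE_INT;        /* stand-in for GCC's 64-bit HWI */

/* True when the item's bit range [offset, offset + size) stays below
   UINT_MAX * BITS_PER_UNIT, i.e. both its byte offset and its end
   still fit the unsigned int byte counts used elsewhere.  */
static bool
agg_item_fits_p (HOST_WIDE_INT offset, HOST_WIDE_INT size)
{
  return offset >= 0
         && size >= 0
         && offset + size < (HOST_WIDE_INT) UINT_MAX * BITS_PER_UNIT;
}

With a 64-bit HOST_WIDE_INT, offset + size should not be able to wrap
for any access get_ref_base_and_extent can describe, so a single
comparison seems to also cover the overflow question from comment #4.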