https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122900

            Bug ID: 122900
           Summary: LTO ICE while linking module without PGO with LTO+PGO
           Product: gcc
           Version: 15.2.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: lto
          Assignee: unassigned at gcc dot gnu.org
          Reporter: jan.zizka at nokia dot com
  Target Milestone: ---

We are facing an LTO ICE in our C++ application when compiling with gcc 15.2.0
with PGO. It happens when a new module is added to the build: the new module
was not instrumented and not run with -fprofile-generate, but it is linked
using profile data previously generated for the application without the new
module.


 $ lto1 -quiet -dumpbase ./libxxx.so.ltrans13.ltrans -m64 -mtune=generic
-march=core2 -g -gz=zlib -O3 -fno-openmp -fno-openacc -fcf-protection=none
-fasynchronous-unwind-tables -fPIC -fprofile-use -fprofile-partial-training
-fltrans @./libxxx.so.ltrans13.ltrans.args.0 -o ./libxxx.so.ltrans13.ltrans.s
-freport-bug

during IPA pass: inline
<src path>/newmodule.cpp: In member function '__ct_base ':
<src path>/newmodule.cpp:62:41: internal compiler error: Segmentation fault
0x1cd673f internal_error(char const*, ...)
        ../../gcc/gcc/diagnostic-global-context.cc:517
0x9cb4df crash_signal
        ../../gcc/gcc/toplev.cc:322
0xa48764 cgraph_edge::speculative_call_indirect_edge()
        ../../gcc/gcc/cgraph.h:1789
0xa48764 copy_bb
        ../../gcc/gcc/tree-inline.cc:2318
0xa494c2 copy_cfg_body
        ../../gcc/gcc/tree-inline.cc:3126
0xa494c2 copy_body
        ../../gcc/gcc/tree-inline.cc:3379
0xa4e113 expand_call_inline
        ../../gcc/gcc/tree-inline.cc:5220
0xa4fb21 gimple_expand_calls_inline
        ../../gcc/gcc/tree-inline.cc:5431
0xa4fb21 optimize_inline_calls(tree_node*)
        ../../gcc/gcc/tree-inline.cc:5621
0x748fa3 inline_transform(cgraph_node*)
        ../../gcc/gcc/ipa-inline-transform.cc:808
0x8ba245 execute_one_ipa_transform_pass
        ../../gcc/gcc/passes.cc:2346
0x8ba245 execute_all_ipa_transforms(bool)
        ../../gcc/gcc/passes.cc:2409
0x51d672 cgraph_node::expand()
        ../../gcc/gcc/cgraphunit.cc:1852
0x51d672 cgraph_node::expand()
        ../../gcc/gcc/cgraphunit.cc:1812
0x51e40c expand_all_functions
        ../../gcc/gcc/cgraphunit.cc:2018
0x51e40c symbol_table::compile()
        ../../gcc/gcc/cgraphunit.cc:2418
0x47a819 lto_main()
        ../../gcc/gcc/lto/lto.cc:693
Please submit a full bug report, with preprocessed source.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.


caller->indirect_calls is NULL when speculative_call_indirect_edge is called:


(gdb) bt
#0  cgraph_edge::speculative_call_indirect_edge (this=0x7ffff49e8000) at
../../gcc/gcc/cgraph.h:1789
#1  copy_bb (id=id@entry=0x7fffffffd690, bb=bb@entry=0x7ffff1692240, num=...,
den=...) at ../../gcc/gcc/tree-inline.cc:2318
#2  0x0000000000a494c3 in copy_cfg_body (id=0x7fffffffd690,
entry_block_map=0x7ffff1656e40, exit_block_map=0x7ffff16922a0, new_entry=0x0)
at ../../gcc/gcc/tree-inline.cc:3126
#3  copy_body (id=id@entry=0x7fffffffd690,
entry_block_map=entry_block_map@entry=0x7ffff1656e40,
exit_block_map=exit_block_map@entry=0x7ffff16922a0,
new_entry=new_entry@entry=0x0)
    at ../../gcc/gcc/tree-inline.cc:3379
#4  0x0000000000a4e114 in expand_call_inline (bb=<optimized out>,
bb@entry=0x7ffff1656e40, stmt=<optimized out>, id=id@entry=0x7fffffffd690,
to_purge=to_purge@entry=0x7fffffffd670)
    at ../../gcc/gcc/tree-inline.cc:5220
#5  0x0000000000a4fb22 in gimple_expand_calls_inline (bb=0x7ffff1656e40,
id=0x7fffffffd690, to_purge=0x7fffffffd670) at
../../gcc/gcc/tree-inline.cc:5431
#6  optimize_inline_calls (fn=0x7ffff58c9e00) at
../../gcc/gcc/tree-inline.cc:5621
#7  0x0000000000748fa4 in inline_transform (node=0x7ffff4933550) at
../../gcc/gcc/ipa-inline-transform.cc:808
#8  0x00000000008ba246 in execute_one_ipa_transform_pass (node=<optimized out>,
ipa_pass=0x2d01df0, do_not_collect=<optimized out>) at
../../gcc/gcc/passes.cc:2346
#9  execute_all_ipa_transforms (do_not_collect=do_not_collect@entry=false) at
../../gcc/gcc/passes.cc:2409
#10 0x000000000051d673 in cgraph_node::expand (this=0x7ffff4933550) at
../../gcc/gcc/cgraphunit.cc:1852
#11 cgraph_node::expand (this=0x7ffff4933550) at
../../gcc/gcc/cgraphunit.cc:1812
#12 0x000000000051e40d in expand_all_functions () at
../../gcc/gcc/cgraphunit.cc:2018
#13 symbol_table::compile (this=0x7ffff7606000) at
../../gcc/gcc/cgraphunit.cc:2418
#14 0x000000000051f346 in symbol_table::compile (this=<optimized out>) at
../../gcc/gcc/cgraphunit.cc:2456
#15 0x000000000047a81a in lto_main () at ../../gcc/gcc/lto/lto.cc:693
#16 0x00000000009cb5ae in compile_file () at ../../gcc/gcc/toplev.cc:452
#17 0x00000000004479a0 in do_compile () at ../../gcc/gcc/toplev.cc:2208
#18 toplev::main (this=this@entry=0x7fffffffda4e, argc=<optimized out>,
argc@entry=21, argv=<optimized out>, argv@entry=0x7fffffffdb88) at
../../gcc/gcc/toplev.cc:2371
#19 0x00000000004492bb in main (argc=21, argv=0x7fffffffdb88) at
../../gcc/gcc/main.cc:39

(gdb) print *caller
$5 = {<symtab_node> = {type = SYMTAB_FUNCTION, resolution = LDPR_UNKNOWN,
definition = 1, alias = 0, transparent_alias = 0, weakref = 0,
cpp_implicit_alias = 0, symver = 0, analyzed = 1, writeonly = 0, 
    refuse_visibility_changes = 0, externally_visible = 0, no_reorder = 0,
force_output = 0, forced_by_abi = 0, unique_name = 0, implicit_section = 0,
body_removed = 0, semantic_interposition = 1, 
    used_from_other_partition = 0, in_other_partition = 0, address_taken = 0,
in_init_priority_hash = 0, need_lto_streaming = 0, offloadable = 0,
ifunc_resolver = 0, order = 2207, decl = 0x7ffff58c9f00, 
    next = 0x7ffff4920f80, previous = 0x7ffff4933880, next_sharing_asm_name =
0x0, previous_sharing_asm_name = 0x0, same_comdat_group = 0x0, ref_list =
{references = {m_vec = 0x2de3480}, referring = {
        m_vec = 0x0}}, alias_target = 0x0, lto_file_data = 0x0, aux = 0x0,
x_comdat_group = 0x0, x_section = 0x0, m_uid = 589}, callees = 0x7ffff49e8068,
callers = 0x7ffff49e7f08, indirect_calls = 0x0, 
  next_sibling_clone = 0x0, prev_sibling_clone = 0x0, clones = 0x0, clone_of =
0x0, call_site_hash = 0x0, former_clone_of = 0x0, simdclone = 0x0, simd_clones
= 0x0, ipa_transforms_to_apply = {m_vec = 0x0}, 
  inlined_to = 0x7ffff4933550, rtl = 0x0, count = {static n_bits = 61, static
max_count = 2305843009213693950, static uninitialized_count =
2305843009213693951, m_val = 2, m_quality = PRECISE}, 
  count_materialization_scale = 10000, profile_id = 0, unit_id = 132,
tp_first_run = 5613, thunk = 0, used_as_abstract_origin = 0, lowered = 0,
process = 0, frequency = NODE_FREQUENCY_NORMAL, 
  only_called_at_startup = 0, only_called_at_exit = 0, tm_clone = 0,
dispatcher_function = 0, calls_comdat_local = 0, icf_merged = 0, nonfreeing_fn
= 0, merged_comdat = 0, merged_extern_inline = 0, 
  parallelized_function = 0, split_part = 0, indirect_call_target = 0, local =
1, versionable = 1, can_change_signature = 1, redefined_extern_inline = 0,
tm_may_enter_irr = 0, ipcp_clone = 0, gc_candidate = 0, 
  called_by_ifunc_resolver = 0, has_omp_variant_constructs = 0, m_summary_id =
-1}
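
Note indirect_calls = 0x0 in the dump above even though the edge being copied
is speculative. For reference, this is roughly what
cgraph_edge::speculative_call_indirect_edge looks like around cgraph.h:1789 in
the 15.x sources (paraphrased from our reading of the code, so treat it as a
sketch rather than an exact quote). The loop starts at caller->indirect_calls
with no NULL check, so an empty indirect_calls list makes the first
e2->speculative read dereference a null pointer, which would explain the
segfault:

  inline cgraph_edge *
  cgraph_edge::speculative_call_indirect_edge ()
  {
    gcc_checking_assert (speculative);
    if (!callee)
      return this;
    /* The loop condition is always true: the function assumes a matching
       speculative edge exists somewhere in caller->indirect_calls.  If
       that list is NULL, e2->speculative reads through a null pointer on
       the very first iteration.  */
    for (cgraph_edge *e2 = caller->indirect_calls;
         true; e2 = e2->next_callee)
      if (e2->speculative
          && call_stmt == e2->call_stmt
          && lto_stmt_uid == e2->lto_stmt_uid)
        return e2;
  }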


We don't see this problem with gcc 14.2.1. We are working on a reproducer that
we could share, but at the moment we are not able to strip the sources down to
a minimum. We are also bisecting to find which change in gcc between 14.2.1
and 15.2.0 makes the difference, but due to the size of our code base this
will take some time.

While we work on the reproducer, does the above backtrace give any hint how to
fix this? It is causing build failures for us. We can work around it by
compiling the newly added module with -fno-profile-use. We know that we should
regenerate the profile data with the actual build, but due to the size of the
codebase we run early pipeline stages with previously generated profiles to
keep pipeline build times down. This worked so far with gcc up to 14.2.1; that
may well have been by accident, or there may be some kind of regression in gcc
15.2.0.

I can provide any other debug data if needed until we have a reproducer we are
able to share.
