https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109504
--- Comment #7 from Hongtao.liu <crazylht at gmail dot com> ---
(In reply to Hongtao.liu from comment #6)
> (In reply to Jakub Jelinek from comment #4)
> > Yeah. Enable all the time and have say the targetm.invalid_conversion,
> > targetm.invalid_unary_op, targetm.invalid_binary_op and something in
> > argument/return value passing reject _Float16/__bf16 in functions without
> > SSE2.
> > That will not be enough though, we'll need to arrange e.g. for the spot
> > where we #undef/#define target macros based on currently active ISA in
> > pragmas to also do that for __STDCPP_FLOAT16_T__ and __STDCPP_BFLOAT16_T__
> > for C++, and change libstdc++ such that for x86 it adds similarly to x86
> > intrin headers something like
> Can we just cpp_undef __STDCPP_FLOAT16_T__ and __STDCPP_BFLOAT16_T__ for C++
> in ix86_target_macros when !TARGET_SSE2, so that there is no need to change
> the libstdc++ part?

We also need to undef __LIBGCC_HAS_%d_MODE__, __LIBGCC_%d_FUNC_EXT__,
__LIBGCC_%d_MANT_DIG__, __LIBGCC_%d_EXCESS_PRECISION__, __LIBGCC_%d_EPSILON__,
__LIBGCC_%d_MAX__ and __LIBGCC_%d_MIN__, which are used when building libgcc
(found in libbid).

And then ix86_emit_support_tinfos is not needed any more, since the type is
always supported? Or just adjust it to:

-  gcc_checking_assert (!float16_type_node && !bfloat16_type_node);
-  float16_type_node = ix86_float16_type_node;
-  bfloat16_type_node = ix86_bf16_type_node;
+  float16_type_node
+    = float16_type_node ? float16_type_node : ix86_float16_type_node;
+  bfloat16_type_node
+    = bfloat16_type_node ? bfloat16_type_node : ix86_bf16_type_node;