On Thu, 21 Dec 2023, Jakub Jelinek wrote:

> Hi!
>
> libubsan still doesn't support bitints, so ubsan contains a workaround and
> emits value 0 and TK_Unknown kind for those.  If the second shift operand
> has a large/huge _BitInt type, this results in internal errors in libubsan
> though, so the following patch provides a temporary workaround for that:
> in the rare case where the last operand has a _BitInt type wider than
> __int128 (or long long on 32-bit arches), it will pretend the shift count
> has that type saturated to its range.  IMHO better than crashing in
> the library.  If the value fits into the __int128 (or long long) range,
> it will be printed correctly (it will merely be reported as having
> __int128/long long type rather than, say, _BitInt(255)); if it doesn't,
> the user will at least know that it is a very large negative or very
> large positive value.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

OK.

> 2023-12-21  Jakub Jelinek  <ja...@redhat.com>
>
> 	PR sanitizer/113092
> 	* c-ubsan.cc (ubsan_instrument_shift): Workaround for missing
> 	ubsan _BitInt support for the shift count.
>
> 	* gcc.dg/ubsan/bitint-4.c: New test.
>
> --- gcc/c-family/c-ubsan.cc.jj	2023-09-06 17:29:47.917880370 +0200
> +++ gcc/c-family/c-ubsan.cc	2023-12-20 15:13:53.515851635 +0100
> @@ -256,6 +256,32 @@ ubsan_instrument_shift (location_t loc,
>      tt = build_call_expr_loc (loc, builtin_decl_explicit (BUILT_IN_TRAP), 0);
>    else
>      {
> +      if (TREE_CODE (type1) == BITINT_TYPE
> +	  && TYPE_PRECISION (type1) > MAX_FIXED_MODE_SIZE)
> +	{
> +	  /* Workaround for missing _BitInt support in libsanitizer.
> +	     Instead of crashing in the library, pretend values above
> +	     maximum value of normal integral type or below minimum value
> +	     of that type are those extremes.  */
> +	  tree type2 = build_nonstandard_integer_type (MAX_FIXED_MODE_SIZE,
> +						       TYPE_UNSIGNED (type1));
> +	  tree op2 = op1;
> +	  if (!TYPE_UNSIGNED (type1))
> +	    {
> +	      op2 = fold_build2 (LT_EXPR, boolean_type_node,
> +				 unshare_expr (op1),
> +				 fold_convert (type1, TYPE_MIN_VALUE (type2)));
> +	      op2 = fold_build3 (COND_EXPR, type2, op2, TYPE_MIN_VALUE (type2),
> +				 fold_convert (type2, unshare_expr (op1)));
> +	    }
> +	  else
> +	    op2 = fold_convert (type2, op1);
> +	  tree op3
> +	    = fold_build2 (GT_EXPR, boolean_type_node, unshare_expr (op1),
> +			   fold_convert (type1, TYPE_MAX_VALUE (type2)));
> +	  op1 = fold_build3 (COND_EXPR, type2, op3, TYPE_MAX_VALUE (type2),
> +			     op2);
> +	  type1 = type2;
> +	}
>        tree utd0 = ubsan_type_descriptor (type0, UBSAN_PRINT_FORCE_INT);
>        tree data = ubsan_create_data ("__ubsan_shift_data", 1, &loc, utd0,
> 				     ubsan_type_descriptor (type1), NULL_TREE,
> --- gcc/testsuite/gcc.dg/ubsan/bitint-4.c.jj	2023-12-20 15:24:18.908027190 +0100
> +++ gcc/testsuite/gcc.dg/ubsan/bitint-4.c	2023-12-20 15:24:14.681086830 +0100
> @@ -0,0 +1,22 @@
> +/* PR sanitizer/113092 */
> +/* { dg-do run { target bitint575 } } */
> +/* { dg-options "-fsanitize=shift -fsanitize-recover=shift" } */
> +
> +int
> +main ()
> +{
> +  volatile _BitInt(255) bi = 12984732985743985734598574358943wb;
> +  bi = 0 >> bi;
> +  bi = 329847329847239847239847329847239857489657986759867549867594875984375wb;
> +  bi = 0 >> bi;
> +  bi = -12984732985743985734598574358943wb;
> +  bi = 0 >> bi;
> +  bi = -329847329847239847239847329847239857489657986759867549867594875984375wb;
> +  bi = 0 >> bi;
> +  return 0;
> +}
> +
> +/* { dg-output "shift exponent \[0-9a-fx]* is too large for \[0-9]*-bit type 'int'\[^\n\r]*(\n|\r\n|\r)" } */
> +/* { dg-output "\[^\n\r]*shift exponent \[0-9a-fx]* is too large for \[0-9]*-bit type 'int'\[^\n\r]*(\n|\r\n|\r)" } */
> +/* { dg-output "\[^\n\r]*shift exponent \[0-9a-fx-]* is negative\[^\n\r]*(\n|\r\n|\r)" } */
> +/* { dg-output "\[^\n\r]*shift exponent \[0-9a-fx-]* is negative" } */
>
> 	Jakub

-- 
Richard Biener <rguent...@suse.de>
SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
Germany; GF: Ivo Totev, Andrew McDonald, Werner Knoblich;
(HRB 36809, AG Nuernberg)