On Sun, 25 Aug 2013, Mike Stump wrote:

> On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandif...@googlemail.com> wrote:
> > We really need to get rid of the #include "tm.h" in wide-int.h.
> > MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
> > thing in there.  If that comes from tm.h then perhaps we should put it
> > into a new header file instead.
>
> BITS_PER_UNIT comes from there as well, and I'd need both.  Grabbing the
> #defines we generate is easy enough, but BITS_PER_UNIT would be more
> annoying.  No port in the tree uses a value other than 8 yet.  So, do we
> just assume BITS_PER_UNIT is 8?
Regarding avoiding tm.h dependence through BITS_PER_UNIT (without actually converting it from a target macro to a target hook), see my suggestions at <http://gcc.gnu.org/ml/gcc-patches/2010-11/msg02617.html>.  It would seem fairly reasonable, if in future other macros are converted to hooks and it's possible to build multiple back ends into a single compiler binary, to require that all such back ends share a value of BITS_PER_UNIT.

BITS_PER_UNIT describes the number of bits in QImode - the RTL-level byte.  I don't think wide-int should care about that at all.  As I've previously noted, many front-end uses of BITS_PER_UNIT really care about the C-level char and so should be TYPE_PRECISION (char_type_node).  Generally, before thinking about how to get BITS_PER_UNIT somewhere, consider whether the code is actually correct to be using BITS_PER_UNIT at all - whether it's the RTL-level QImode that is really what's relevant to the code.

-- 
Joseph S. Myers
jos...@codesourcery.com