On Mon, 2 May 2011, Richard Guenther wrote:

> This changes the code that deals with too large array sizes to
> use int_fits_type_p instead of relying on the TREE_OVERFLOW setting
> of the tree folder.  The latter will break once we don't treat
> sizetypes specially (and they keep being unsigned).
>
> Bootstrapped and tested on x86_64-unknown-linux-gnu, ok for trunk?
An array size in C or C++ ought to be considered to overflow (and so
give an error if the size is compile-time constant) if the size of the
array in bytes is greater than or equal to half the address space,
because it is then no longer possible to compute differences between
all array elements, and pointers to just past the end of the array,
reliably as ptrdiff_t values (cf. PR 45779).  Thus, overflow in a
signed rather than unsigned type is what's relevant.

I don't know if there's a relevant testcase in the testsuite, but the
patch is OK with the addition of a testcase such as

/* { dg-do compile } */
/* { dg-options "" } */
typedef __SIZE_TYPE__ size_t;
extern char a[((size_t)-1 >> 1) + 1]; /* { dg-error "too large" } */
extern char b[((size_t)-1 >> 1)];
extern int c[(((size_t)-1 >> 1) + 1) / sizeof(int)]; /* { dg-error "too large" } */
extern int d[((size_t)-1 >> 1) / sizeof(int)];

supposing it passes.

-- 
Joseph S. Myers
jos...@codesourcery.com
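
[Illustrative sketch, not part of the original message: why half the
address space is the limit.  If an object occupies more than PTRDIFF_MAX
bytes, subtracting pointers to its elements can overflow ptrdiff_t.  The
helper array_too_large below is hypothetical, not GCC's code; it assumes
a typical target where size_t and ptrdiff_t have the same width, so that
half the address space corresponds to values above PTRDIFF_MAX.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: decide whether an array of 'nelts' elements of
   'elt_size' bytes each should be rejected as "too large".  */
static int
array_too_large (size_t nelts, size_t elt_size)
{
  /* Reject if the total size in bytes cannot even be computed in
     size_t without wrapping...  */
  if (elt_size != 0 && nelts > SIZE_MAX / elt_size)
    return 1;
  /* ...or if it exceeds PTRDIFF_MAX, i.e. is at least half the
     address space on the assumed target.  */
  return nelts * elt_size > (size_t) PTRDIFF_MAX;
}

int
main (void)
{
  /* Mirrors the testcase above: ((size_t)-1 >> 1) + 1 chars is too
     large, (size_t)-1 >> 1 chars is not.  */
  printf ("%d\n", array_too_large (((size_t) -1 >> 1) + 1, sizeof (char))); /* prints 1 */
  printf ("%d\n", array_too_large ((size_t) -1 >> 1, sizeof (char)));       /* prints 0 */
  return 0;
}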