The basic idea seems fine, but isn't that off by a factor of 2?  It defines
size_t_bits_minus_2 = sizeof (size_t) * CHAR_BIT - 2
and then defines SIZE_MAX to (((1U << $size_t_bits_minus_2) - 1) * 2 + 1).
Unless I'm missing something, on a 32-bit host, that will set SIZE_MAX
to 2147483647 instead of the correct value, 4294967295.
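
Concretely, here's a quick check (just my own illustration, not code from
the patch, assuming a typical 32-bit host: 32-bit size_t and unsigned int,
CHAR_BIT == 8):

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int size_t_bits_minus_2 = sizeof (size_t) * CHAR_BIT - 2; /* 30 */
    /* Same shape as the current expression; prints 2147483647,
       i.e. only half the size_t range.  */
    printf ("%u\n", ((1U << size_t_bits_minus_2) - 1) * 2 + 1);
    return 0;
  }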

Instead, how about if we compute
size_t_bits = sizeof (size_t) * CHAR_BIT
and then define SIZE_MAX to (((1U << ($size_t_bits - 1)) - 1) * 2 + 1)?
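
Under the same assumptions, the adjusted expression comes out right:

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int size_t_bits = sizeof (size_t) * CHAR_BIT; /* 32 */
    /* Prints 4294967295, the full size_t range.  */
    printf ("%u\n", ((1U << (size_t_bits - 1)) - 1) * 2 + 1);
    return 0;
  }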

