https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119170
Kang-Che Sung <Explorer09 at gmail dot com> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |Explorer09 at gmail dot com

--- Comment #9 from Kang-Che Sung <Explorer09 at gmail dot com> ---
I'm forwarding my comment from LLVM/Clang's issue report so that GCC developers can see it and discuss it.

----(Forwarded comment below)----

Is there still room to make comments about the proposal? I make these comments based only on my personal experience of writing C programs, and I do not represent GCC, Clang, or any compiler developer.

### `_Widthof(Type)`

The main reason (to me) justifying the inclusion of `_Widthof` in the standard is to allow easy retrieval of the bit width of **bit-precise integer types, a.k.a. `_BitInt(N)`**, as the traditional expression `sizeof(Type) * CHAR_BIT` won't work for retrieving their width. That the `_Widthof` operator also works with traditional integer types is a plus, but not a necessity. I think @alejandro-colomar can add the `sizeof` on `_BitInt(N)` issue as a primary motive for introducing `_Widthof` in the standard. (This doesn't change the standard text; it is just an amendment to the Motive section.)

There is one thing that would be useful to clarify (it could go in a footnote of the standard text): should `_Widthof(bool)` be 1? A literal interpretation of the semantics suggests so, but it would be useful to state that explicitly.

### `_Minof(Type)` and `_Maxof(Type)`

I'm personally skeptical of these two. And, if I had the decision power, I would want these two operators voted on separately from the `_Widthof` vote. With the inclusion of `_Widthof`, and with two's complement representation mandated for signed integers since C23, the `_Minof` and `_Maxof` expressions would be **implementable in one way only**, so the claim that they are "hard to write correctly and review" (quoting the Motive of the proposal) doesn't apply.
```c
#define IS_SIGNED(T) ((T) -1 < 1)
#define MAXOF(T) \
    ((T) ((((T) 1 << (_Widthof(T) - 1 - IS_SIGNED(T))) - 1) * 2 + 1))
#define MINOF(T) ((T) ~MAXOF(T)) /* Always works with two's complement representation */
```

Yes, there is only one way to implement these, namely the code above. I currently have multiple concerns with the `_Minof` and `_Maxof` proposal:

* They are not technically necessary, and when programs need them, they are trivially implementable. (Standardizing them as keywords would only introduce noise to the language.)
* The proposed keywords `_Minof` and `_Maxof` can easily be confused with the `MIN()` and `MAX()` macros that programs often define for retrieving the least or greatest value among two or more expressions. I see no discussion of that potential user (I mean programmer, not compiler writer) confusion.
* Why not less confusing keywords such as `_Typemin` or `_Typemax`? No discussion of this either.

That's all I think.