https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64432
Francois-Xavier Coudert <fxcoudert at gcc dot gnu.org> changed:

           What            |Removed                       |Added
----------------------------------------------------------------------------
           Assignee        |unassigned at gcc dot gnu.org |fxcoudert at gcc dot gnu.org

--- Comment #4 from Francois-Xavier Coudert <fxcoudert at gcc dot gnu.org> ---

I'm not sure this is a bug; the behavior is definitely by design (as the comment indicates), and I believe it is allowed by the successive standards, which are in any case very weakly worded on this point.

The root of the problem is that we want SYSTEM_CLOCK to return high-precision values for large integer kinds, and to fall back to lower-precision results that fit in fewer bytes for smaller integer kinds. Thus, one should call SYSTEM_CLOCK once with all the necessary arguments, not multiple times with varying argument kinds.

The only other consistent option I can see would be to return high-resolution results in all cases, but then SYSTEM_CLOCK with 32-bit integers would wrap around in less than an hour: at microsecond resolution, a signed 32-bit COUNT overflows after 2**31 microseconds, roughly 36 minutes. If you have another idea, please post a list of what you think should happen in all the various cases (all possible combinations of arguments have to be allowed).
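
For illustration (not part of the original report), here is a minimal sketch of the recommended calling pattern: query COUNT, COUNT_RATE and COUNT_MAX in a single call, with all arguments of the same large integer kind, so that the three values are mutually consistent. The kind parameter obtained via SELECTED_INT_KIND(18) is an assumption (a 64-bit integer kind on typical targets):

    program clock_demo
      implicit none
      integer, parameter :: ik = selected_int_kind(18)  ! assumed 64-bit kind
      integer(ik) :: t0, t1, rate, cmax

      ! One call queries count, rate and max together, all of kind ik,
      ! so the resolution implied by RATE matches the values in COUNT.
      call system_clock(count=t0, count_rate=rate, count_max=cmax)

      ! ... code being timed ...

      call system_clock(count=t1)
      if (t1 < t0) t1 = t1 + cmax + 1_ik   ! handle counter wrap-around
      print *, 'elapsed seconds:', real(t1 - t0) / real(rate)
    end program clock_demo

The point is that mixing kinds across calls (e.g. a default-integer COUNT in one call and an INTEGER(8) COUNT_RATE in another) may combine values measured at different resolutions, which is exactly the misuse the design warns against.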