https://gcc.gnu.org/bugzilla/show_bug.cgi?id=119141

--- Comment #7 from Jonathan Wakely <redi at gcc dot gnu.org> ---
And the note you quoted is about "implicit truncation" to a lower resolution
duration, i.e. preventing milliseconds from being implicitly converted to
seconds, which would lose precision.

It has nothing to do with preventing milliseconds(LLONG_MAX) from being
truncated when converted to nanoseconds **because that's impossible**. You
cannot fit LLONG_MAX * 1000000 in a long long without truncation.
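A minimal sketch (not part of the original comment) illustrating the arithmetic:
nanoseconds' representable range is a factor of 1'000'000 smaller than
milliseconds', so milliseconds(LLONG_MAX) simply does not fit.

  #include <chrono>
  #include <climits>
  #include <iostream>

  int main()
  {
      using namespace std::chrono;

      // The largest value a long long can hold:
      std::cout << "LLONG_MAX          = " << LLONG_MAX << '\n';

      // The largest millisecond count that still fits in nanoseconds
      // is nanoseconds::max() / 1'000'000, roughly 9.2e12 ms (~292 years):
      std::cout << "max convertible ms = "
                << duration_cast<milliseconds>(nanoseconds::max()).count()
                << '\n';

      // milliseconds(LLONG_MAX) is about 292 million years, so
      //   nanoseconds ns = milliseconds(LLONG_MAX);
      // cannot be represented without truncation.
  }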

See the example:

[Example 2:
  duration<int, milli> ms(3);
  duration<int, micro> us = ms;       // OK
  duration<int, milli> ms2 = us;      // error
— end example]
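A short sketch of the rule the standard's example shows: the exact (widening)
conversion compiles implicitly, while the narrowing one requires an explicit
duration_cast.

  #include <chrono>

  int main()
  {
      using namespace std::chrono;

      duration<int, std::milli> ms(3);
      duration<int, std::micro> us = ms;      // OK: exact, no precision lost

      // duration<int, std::milli> ms2 = us;  // error: would truncate

      duration<int, std::milli> ms2 =
          duration_cast<duration<int, std::milli>>(us);  // OK: truncation is explicit
      (void)ms2;
  }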
