UTC is uniform but discontinuous. Anyone who wants to precisely and
reliably measure the time between two points needs a uniform,
continuous standard such as TAI.

TAI can be implemented using the defined relation:
TAI = UTC + 10s + Announced leap seconds since 1972 (Published here: 
http://maia.usno.navy.mil/ser7/tai-utc.dat)

(24 leap seconds so far)
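As a sketch of that relation (the leap-second table below is hand-truncated
for illustration; the real data lives in the tai-utc.dat file linked above):

```python
from datetime import datetime, timedelta

# Truncated illustration of the leap-second table:
# (UTC date the announcement takes effect, cumulative leap seconds since 1972).
LEAP_SECONDS = [
    (datetime(1972, 7, 1), 1),   # first leap second, end of June 1972
    (datetime(2009, 1, 1), 24),  # most recent as of this bug: 24 so far
]

def utc_to_tai(utc: datetime) -> datetime:
    """TAI = UTC + 10 s + announced leap seconds since 1972."""
    leaps = 0
    for effective, cumulative in LEAP_SECONDS:
        if utc >= effective:
            leaps = cumulative
    return utc + timedelta(seconds=10 + leaps)

print(utc_to_tai(datetime(2012, 4, 1)))  # 2012-04-01 00:00:34, i.e. TAI-UTC = 34 s
```

With all 24 leap seconds applied, TAI runs 34 seconds ahead of UTC.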

So if the system's UTC incorrectly fails to insert a leap second, a TAI
derived from it would appear to skip a second. I could therefore
incorrectly measure a 25 ms time interval as 1025 ms.
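The arithmetic behind that error, as a trivial sketch (the 25 ms interval is
the hypothetical example above):

```python
# An interval spanning a leap second that the system clock failed to insert:
# the derived TAI jumps forward by the missed second.
true_interval_ms = 25        # actual elapsed time
missed_leap_seconds = 1      # leap second the system's UTC skipped
measured_ms = true_interval_ms + missed_leap_seconds * 1000
print(measured_ms)  # 1025
```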


I could also implement UT1 (which is continuous but non-uniform) by the defined 
relation:
UT1 = UTC + DUT1 (Published here: http://maia.usno.navy.mil/ser7/finals.all)

See: https://en.wikipedia.org/wiki/DUT1
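A sketch of the UT1 relation (the DUT1 value below is made up for
illustration; real values come from the finals.all file linked above and are
kept within +/-0.9 s by leap-second insertion):

```python
from datetime import datetime, timedelta

def utc_to_ut1(utc: datetime, dut1_seconds: float) -> datetime:
    """UT1 = UTC + DUT1, with DUT1 as published by IERS (|DUT1| < 0.9 s)."""
    return utc + timedelta(seconds=dut1_seconds)

# Illustrative DUT1 of -0.45 s, not a real published value.
print(utc_to_ut1(datetime(2012, 4, 1), -0.45))  # 2012-03-31 23:59:59.550000
```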

Again, if UTC incorrectly fails to insert a leap second, UT1 would
appear to skip a second and would incorrectly be discontinuous.


See IERS who publish the astronomical data and announce leap seconds:
http://www.iers.org/
http://maia.usno.navy.mil/


Anyway, how does it make sense to sync a clock over the network to high 
precision using time protocols, when the system's UTC can't even be relied on 
to a precision of a second?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/970966

Title:
  UTC is incorrectly implemented; it does not handle leap seconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/970966/+subscriptions
