** Changed in: uptimed (Debian)
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/498439
Title:
  uprecords reports >100% uptime
I've also encountered this bug in the current release of CentOS 7.4.1708
(3.10.0-693.11.1.el7.x86_64).
I'm running uptimed on a few other units as well (Ubuntu 17.10, 4.13.0-16-generic, and
Mac OS X 10.11.6 [15G17023], kernel 15.6.0), and I haven't had any problems with negative
uptime reports from those.
This bug is present in the current release of Ubuntu:
     #               Uptime | System                                    Boot up
----------------------------+--------------------------------------------------
         112 days, 18:05:05 | Linux 4.4.0-92-generic   Wed Aug 16 15:41:17 2017
** Changed in: uptimed (Debian)
Status: Unknown => Confirmed
--
** Bug watch added: Debian Bug tracker #654830
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=654830
** Also affects: uptimed (Debian) via
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=654830
Importance: Unknown
Status: Unknown
--
I have the same issue on a Raspberry Pi:
pi@raspberrypi:~$ uprecords
     #               Uptime | System                                    Boot up
----------------------------+--------------------------------------------------
     1   403 days, 04:46:21 | Linux 3.12.31+           Thu Nov  6
I have the same problem, running on a KVM VM.
Is there no easy way just to zero out negative values?
Do I understand the problem correctly: by the time the server
reboots, the clock has moved forward; during the reboot it is
reset (moving back in time), and then when the server comes back up
the computed downtime is negative?
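As a sketch of the "zero out negative values" idea (not the actual uptimed
code; the struct and field names here are hypothetical), a clamp in C could
look something like this:

#include <time.h>

/* Hypothetical record layout, loosely modelled on what uptimed keeps:
 * each entry remembers when a kernel booted and when it was last seen. */
struct uptime_entry {
    time_t boot_time;   /* wall-clock time at boot        */
    time_t last_seen;   /* wall-clock time of last update */
};

/* Downtime between two boots, clamped so it can never go negative.
 * A negative gap shows up when the clock was ahead before the reboot
 * and gets stepped backwards afterwards. */
static time_t downtime_between(const struct uptime_entry *prev,
                               const struct uptime_entry *next)
{
    time_t gap = next->boot_time - prev->last_seen;
    return gap > 0 ? gap : 0;   /* zero out negative values */
}

That only hides the symptom, of course; the underlying clock step is still
there.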
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: uptimed (Ubuntu)
Status: New => Confirmed
--
The way towards fixing this bug would involve
clock_gettime(CLOCK_MONOTONIC_RAW, struct timespec *monotonictime);
which is Linux 2.6.28+ specific and is not subject to change by
ntpdate and friends.
See man 2 clock_gettime.
I'm not sure we should just replace sysinfo->uptime, however.
What's going on here is a difference between the monotonic time reported
by /proc/uptime and the hwclock, which drives a few of the other numbers
(I'm not sure of the specifics). I think this all just means your monotonic
(CPU-based) clock is running a little fast compared to the RTC
(battery-powered clock).
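For what it's worth, reading the raw monotonic clock discussed above looks
roughly like this on Linux 2.6.28+ (a standalone sketch of the call, not a
patch against uptimed; link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono;

    /* CLOCK_MONOTONIC_RAW ticks at the raw hardware rate and is not
     * slewed or stepped by NTP (see man 2 clock_gettime). */
    if (clock_gettime(CLOCK_MONOTONIC_RAW, &mono) != 0) {
        perror("clock_gettime");
        return 1;
    }

    printf("raw monotonic reading: %ld.%09ld s\n",
           (long)mono.tv_sec, mono.tv_nsec);
    return 0;
}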