Hello,

Michael Banck via Bug reports for the GNU Hurd, on Wed, 24 Sep 2025 10:41:59 +0200, wrote:
> > I'd tend to say this is an issue in postgresql: it shouldn't assume that
> > clocks have infinite precision.
> 
> I guess there is a spectrum here - certainly infinite precision is
> unrealistic, but the question is what kind of minimum timer precision
> applications can require.
Well, looking at 1ns measurements is very akin to infinite precision, since that's exactly the order of a CPU instruction...

> Whereas on my Linux Thinkpad host, I see this:
> 
> $ LANG=C ./pg_test_timing
> Testing timing overhead for 3 seconds.
> Average loop time including overhead: 13.84 ns
> Histogram of timing durations:
>   <= ns   % of total  running %      count
>       0       0.0000     0.0000          0
>       1       0.0000     0.0000          0
>       3       0.0000     0.0000          0
>       7       0.0000     0.0000          0
>      15      97.3170    97.3170  210936922

Possibly Linux uses the HPET as the *only* source of time, not using actual ticks from the clock. I don't know whether the HPET has more or less drift than the RTC, i.e. whether it's a better or worse idea without NTP to compensate for the drift, but it would indeed hopefully provide a value that increases every few CPU instructions.

It's "a matter" of reworking kern/mach_clock.c's clock_interrupt to adjust the time from the HPET rather than just increasing it by one tick.

> Those 0ns on qemu are the problem for the (probably artificial) stats
> isolation test, but the Postgres hackers are also very unhappy about
> proposing random sleep delays in their testsuite.

Not introducing some delay would mean that with e.g. a 100GHz CPU it would surely just fail. Of course such a speed won't actually exist, but this is not just a theoretical case: it is something that could be built, just way too expensive to come to market, and that shows that yes, some delay does make sense.

If they really want to see the clock advance for *sure*, they would typically have to wait for two clock_getres() durations. Otherwise, no standard guarantees that the time will have advanced.

With cheers,
Samuel
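
PS: to make the clock_getres() point concrete, here is a minimal sketch, assuming the POSIX clock_getres()/clock_gettime()/nanosleep() interfaces on CLOCK_MONOTONIC (illustrative program only, not a proposed patch):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, before, wait, after;

    /* Ask how coarse the clock actually is. */
    if (clock_getres(CLOCK_MONOTONIC, &res) != 0)
        return 1;
    clock_gettime(CLOCK_MONOTONIC, &before);

    /* One resolution unit may fall entirely within the current tick, so
       wait for two of them before expecting the reported time to move. */
    wait.tv_sec = res.tv_sec * 2;
    wait.tv_nsec = res.tv_nsec * 2;
    if (wait.tv_nsec >= 1000000000L) {
        wait.tv_sec += 1;
        wait.tv_nsec -= 1000000000L;
    }
    nanosleep(&wait, NULL);

    clock_gettime(CLOCK_MONOTONIC, &after);
    printf("resolution %ld.%09ld s, clock advanced: %s\n",
           (long) res.tv_sec, res.tv_nsec,
           (after.tv_sec > before.tv_sec ||
            (after.tv_sec == before.tv_sec && after.tv_nsec > before.tv_nsec))
               ? "yes" : "no");
    return 0;
}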
