The plethora of bug reports from the last pretest makes me believe that many systems simply cannot (currently) reliably handle a millisecond difference in mtime. Despite the careful test that Zack implemented in _AM_FILESYSTEM_TIMESTAMP_RESOLUTION, pretesters continued to report "random" errors.
My current idea is to only try for a hundredth of a second. This makes the automake test suite take a few extra minutes to run (something like 12 vs. 10, in my trials), but hopefully it will generate fewer false positives, which would be well worth it, IMHO.

It seems impossible to truly test for a capable (file)system, or otherwise figure out the underlying problem. At least, I don't have any good ideas.

The whole idea of the am test command $sleep seems like a bogus workaround to me. On the other hand, failing a true fix, I think it needs to be added in many more places, with the advent of subsecond mtimes. Debugging all this is nowhere on my list :(.

Anyway, if anyone has any ideas about how to better proceed, please speak up. Else I'll install this trivial patch, and we'll see how the next pretest goes. --thanks, karl.

--- a/m4/sanity.m4
+++ b/m4/sanity.m4
@@ -34,7 +34,7 @@ am_cv_filesystem_timestamp_resolution=2
 # Only try to go finer than 1s if sleep can do it.
 am_try_resolutions=1
 if $am_cv_sleep_fractional_seconds; then
-  am_try_resolutions="0.001 0.01 0.1 $am_try_resolutions"
+  am_try_resolutions="0.01 0.1 $am_try_resolutions"
 fi
 # In order to catch current-generation FAT out, we must *modify* files
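P.S. For anyone not staring at sanity.m4: the probe amounts to roughly the
shell sketch below. It is a simplified illustration, not the literal macro
body; the loop variable name and the "test -nt" comparison are mine for
brevity, and the real macro is more careful about how it compares mtimes
and cleans up.

  # Simplified sketch of the resolution probe; names and the "test -nt"
  # comparison are illustrative, not the literal sanity.m4 code.
  am_cv_sleep_fractional_seconds=true  # in reality set by a separate sleep probe
  am_cv_filesystem_timestamp_resolution=2  # pessimistic default
  am_try_resolutions=1
  if $am_cv_sleep_fractional_seconds; then
    am_try_resolutions="0.01 0.1 $am_try_resolutions"  # finest first
  fi
  for am_try_res in $am_try_resolutions; do
    echo one > conftest.ts1    # modify files, don't just touch them,
    sleep $am_try_res          # so FAT-style timestamp rounding shows up
    echo two > conftest.ts2
    if test conftest.ts2 -nt conftest.ts1; then
      # mtimes are distinguishable at this spacing;
      # keep the finest resolution that works.
      am_cv_filesystem_timestamp_resolution=$am_try_res
      break
    fi
  done
  rm -f conftest.ts1 conftest.ts2

The point of the patch is just to drop 0.001 from the candidate list: the
finest spacing we ever claim is 0.01s, so a system that can't reliably
distinguish millisecond mtimes no longer gets classified as
millisecond-capable only to fail "randomly" later in the test suite.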