On Sun, 13 Apr 2025 at 13:22:21 +0200, Santiago Vila wrote:
> After building all the archive (trixie/sid) with nocheck, I only
> found 33 new packages which fail to build with nocheck that were not
> reported before. Admittedly a little bit more than I expected, but
> certainly not "hundreds" as some people feared.

> (The main reason there are not so many is that Helmut Grohne has been
> reporting those every now and then).

I think there are two subtly different things that you could mean by "with nocheck":

1. DEB_BUILD_OPTIONS=nocheck, but no special build profiles
    - therefore <!nocheck> build-dependencies are still installed
2. DEB_BUILD_OPTIONS=nocheck DEB_BUILD_PROFILES=nocheck
    - therefore <!nocheck> build-dependencies are likely to be missing
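
For concreteness, a <!nocheck> build-dependency is one annotated like this in debian/control (the package names here are purely illustrative):

    Build-Depends: debhelper-compat (= 13),
                   python3-all,
                   python3-pytest <!nocheck>,

With the nocheck profile active, the python3-pytest line is ignored when resolving build-dependencies, which is why it is likely to be missing from the build environment in scenario (2.).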

(DEB_BUILD_PROFILES=nocheck without DEB_BUILD_OPTIONS=nocheck is not allowed by https://wiki.debian.org/BuildProfileSpec, and for convenience some build tools automatically convert it into (2.), with a warning.)
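
In terms of concrete invocations, the two scenarios look roughly like this with dpkg-buildpackage (a sketch; -us -uc merely skips signing, and sbuild or other wrappers have equivalent options):

    # scenario (1.): tests are skipped, <!nocheck> build-deps installed
    DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -us -uc

    # scenario (2.): additionally activate the nocheck build profile,
    # so <!nocheck> build-deps are not needed and may be absent
    DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -us -uc --build-profiles=nocheck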

Failing to build in either of those configurations is a bug, and both could be argued to be either RC or non-RC depending on opinions and priorities, but in practice I think (1.) is going to succeed more often than (2.).

#1102605, for example, is a package that FTBFS when we do (2.) but would have succeeded if we had done (1.). This is a fairly common failure mode, but I would expect the converse, packages that FTBFS in scenario (1.) but would build successfully if their tests were run, to be very rare.
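
A common shape for that failure mode is a debian/rules that unconditionally invokes a tool whose build-dependency is marked <!nocheck>. The usual make-level guard looks something like this (a minimal sketch; run-test-helper is a hypothetical test-only tool, and because the spec above requires nocheck in DEB_BUILD_OPTIONS whenever the profile is active, testing the options covers both scenarios):

    # run-test-helper is hypothetical, standing in for any tool whose
    # build-dependency is marked <!nocheck>
    override_dh_auto_test:
    ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
	run-test-helper
    endif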

> My current plan for now would be to report them as "important" (using
> some usertag)

I think that seems reasonable, but in the template email please be clear about which scenario it was that you tried. Helmut's wording "fails to build from source in unstable when enabling the nocheck build profile" seems good - that unambiguously identifies scenario (2.).
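
Something like this as the skeleton of the report would make the scenario explicit (everything in angle brackets is a placeholder, and the user address and usertag would need to be whatever you agree on):

    Package: src:<package>
    Version: <version>
    Severity: important
    User: <agreed address>
    Usertags: <agreed usertag>

    <package> fails to build from source in unstable when enabling the
    nocheck build profile.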

> On a personal note, I consider those bugs interesting to fix because
> I think there should be a safe procedure to build all packages in the
> archive in a way which minimizes build failures as much as possible.

If that's what you want, I think scenario (1.) is the one that will maximize your number of successful builds, although possibly at the cost of shipping software that compiles but does not work (in ways that build-time tests would have detected). Running build-time tests is a trade-off: it makes us less likely to ship broken software, at the cost of sometimes treating unimportant bugs (either in the software under test, or in the tests themselves) as more serious than they necessarily need to be.

Helmut has been testing scenario (2.) and reporting bugs when it fails, because it's interesting as a way to reduce build-dependencies for bootstrappability and cross-compiling, but the price he pays for that is that he'll sometimes see build failures like #1102605.

    smcv
