Re: Reviving schroot as used by sbuild

2024-06-27 Thread Bastian Venthur

On 25.06.24 15:02, Simon McVittie wrote:

I have to ask:

Could we use a container framework that is also used outside the Debian
bubble, rather than writing our own from first principles every time, and
ending up with a single-maintainer project being load-bearing for Debian
*again*? I had hoped that after sbuild's history with schroot becoming
unmaintained, and then being revived by a maintainer-of-last-resort who
is one of the same few people who are critical-path for various other
important things, we would recognise that as an anti-pattern that we
should avoid if we can.


Great proposal!


Here's the Dockerfile/Containerfile to turn a sysroot tarball into an
OCI image (obviously it can be extended with LABELs and other
customizations, but this is fairly close to minimal):

 FROM scratch
 ADD sysroot.tar.gz /
 CMD ["/bin/bash"]


For some time now I have had the idea to build my Debian packages in a 
clean Docker container instead of using cowbuilder etc., but due to lack 
of time and the complexity of the available solutions I never got very 
far. Do you happen to have a minimal example that would work for most 
projects and does not depend too much on opinionated Debian-specific tooling?



Cheers!

Bastian

--
Dr. Bastian Venthur https://venthur.de
Debian Developer venthur at debian org




Re: Reviving schroot as used by sbuild

2024-06-27 Thread Helmut Grohne
Hi Simon,

Thanks for having taken the time to do another extensive writeup. Much
appreciated.

On Wed, Jun 26, 2024 at 06:11:09PM +0100, Simon McVittie wrote:
> On Tue, 25 Jun 2024 at 18:55:45 +0200, Helmut Grohne wrote:
> > The main difference to how everyone else does this is that in a typical
> > sbuild interaction it will create a new user namespace for every single
> > command run as part of the session. sbuild issues tens of commands
> > before launching dpkg-buildpackage and each of them creates new
> > namespaces in the Linux kernel (all of them using the same uid mappings,
> > performing the same bind mounts and so on). The most common way to think
> > of containers is different: You create those namespaces once and reuse
> the same namespace kernel objects for multiple commands that are part of
> the same session (e.g. installation of build dependencies and
> dpkg-buildpackage).
> 
> Yes. My concern here is that there might be non-obvious reasons why
> everyone else is doing this the other way, which could lead to behavioural
> differences between unschroot and all the others that will come back to
> bite us later.

I do not share this concern (though I do share other concerns of yours). The risk of
behavioural differences is fairly low, because we do not expect any
non-filesystem state to transition from one command to the next. Much to
the contrary, the use of a pid namespace for each command ensures
reliable process cleanup, so no background processes can accidentally
stick around.

I am concerned about behavioural differences due to the
reimplementation-from-first-principles aspect, though. Jochen and
Aurelien will know more here, but I think we had a fair number of FTBFS
bugs due to such differences. None of them was due to the architecture of
creating namespaces for each command; most were due to not having gotten
containers right in general. Some were due to broken packages, e.g.
packages skipping tests when they detect schroot.

Also note that just because I do not share your concern here does not
imply that I'd be favouring sticking to that architecture. I expressed
elsewhere that I see benefits in changing it for other reasons. At this
point I see this more and more as a non-boolean question. There is a
spectrum between "create namespaces once and use them for the entire
session" and "create new namespaces for each command", and increasingly
I believe that what would be best for sbuild is somewhere in between.
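
To make the two ends of that spectrum concrete, here is a rough, untested
sketch using podman with a shared --rootfs directory ("hello" stands in
for a real source package):

 # per-command: fresh namespaces for every step, but shared filesystem
 # state, since both commands operate on the same rootfs directory
 podman run --rm --rootfs "$dir" apt-get build-dep -y hello
 podman run --rm --rootfs "$dir" dpkg-buildpackage -us -uc
 # per-session: namespaces created once and reused for every step
 c=$(podman run -d --rootfs "$dir" sleep infinity)
 podman exec "$c" apt-get build-dep -y hello
 podman exec "$c" dpkg-buildpackage -us -uc
 podman rm -f "$c"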

> For whole-system containers running an OS image from init upwards,
> or for virtual machines, using ssh as the IPC mechanism seems
> pragmatic. Recent versions of systemd can even be given a ssh public
> key via the systemd.system-credentials(7) mechanism (e.g. on the kernel
> command line) to set it up to be accepted for root logins, which avoids
> needing to do this setup in cloud-init, autopkgtest's setup-testbed,
> or similar.

Another excursion: systemd goes beyond this and also exposes the ssh
port via an AF_VSOCK socket (in the case of VMs) or a unix domain socket
on the outside (in the case of containers) to make safe discovery of the
ssh access easier.
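
For example, a qemu VM could be handed a root login key through an SMBIOS
credential (a sketch; the credential name comes from
systemd.system-credentials(7), and the key is a placeholder):

 qemu-system-x86_64 ... \
   -smbios "type=11,value=io.systemd.credential:ssh.authorized_keys.root=ssh-ed25519 AAAA... me@example"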

> For "application" containers like the ones you would presumably want
> to be using for sbuild, presumably something non-ssh is desirable.

I partially concur, but this goes into the larger story I hinted at in
my initial mail. If we move beyond containers and look into building
inside a VM (e.g. sbuild-qemu) we are in a difficult spot, because we
need e.g. systemd for booting, but we may not want it in our build
environment. So long term, I think sbuild will have to differentiate
between three contexts:
 * The system it is being run on
 * The containment or virtualisation environment used to perform the
   build
 * The system where the build is being performed inside the containment
   or virtualisation environment

At present, sbuild does not distinguish the latter two and always treats
them as equal. When building inside a VM, we may eventually want to create
a chroot inside the VM to arrive at a minimal environment. The same
technique is applicable to system containers. When doing this, we
minimize the build environment and do not mind the extra ssh dependency
in the container or virtualisation environment. For now though, this is
all wishful thinking. As long as this distinction does not exist, we
pretty much want minimal application containers for building as you
said.

> If you build an image by importing a tarball that you have built in
> whatever way you prefer, minimally something like this:
> 
> $ cat > Dockerfile <<EOF
> FROM scratch
> ADD minbase.tar.gz /
> EOF
> $ podman build -f Dockerfile -t local-debian:sid .

I don't quite understand the need for a Dockerfile here. I suspect that
this is the obvious way that works reliably, but my impression was that
using podman import would be easier. I had success with this:

mmdebstrap --format=tar --variant=apt unstable - | podman import --change CMD=/bin/bash - local-debian/sid


BIMI verified email logo for @debian.org addresses

2024-06-27 Thread Blair Noctis
Hi,

[BIMI], or Brand Indicators for Message Identification, is a specification to
let supporting email clients show a brand's logo next to verified email domains.
It would be great to have the Debian logo shown next to @debian.org addresses.

An email service provider implements it by deploying DMARC with a
quarantine or reject policy, and by adding a TXT record pointing to the logo.
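
Concretely, that boils down to DNS records along these lines (hypothetical
values for debian.org; the logo must be an SVG Tiny PS file, and many mail
providers additionally require a Verified Mark Certificate referenced via
an a= tag):

 _dmarc.debian.org.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@debian.org"
 default._bimi.debian.org. IN TXT "v=BIMI1; l=https://www.debian.org/logo.svg;"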

For a quick intro, please see https://postmarkapp.com/blog/what-the-heck-is-bimi

BIMI: https://bimigroup.org/

-- 
Sdrager,
Blair Noctis




Re: Reviving schroot as used by sbuild

2024-06-27 Thread Reinhard Tartler
On Thu, Jun 27, 2024 at 7:45 AM Helmut Grohne  wrote:

> Please allow for another podman question (more people than just Simon
> may know the answer). Every time I run a podman container (e.g. when I
> run autopkgtest) my ~/.local/share/containers grows. I think autopkgtest
> manages to clean up in the end, but e.g. podman run -it ... seems to
> leave stuff behind.
>

Have you tried 'podman run -it --rm'?
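
For leftovers that have already accumulated, something like this should
reclaim the space (a sketch; these prune commands delete stopped
containers and unused images, so use them with care):

 podman container prune   # remove all stopped containers
 podman image prune       # remove dangling images
 podman system prune      # both of the above, plus unused networks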


Re: BIMI verified email logo for @debian.org addresses

2024-06-27 Thread Andrey Rakhmatullin
On Thu, Jun 27, 2024 at 07:29:45PM +0800, Blair Noctis wrote:
> Hi,
> 
> [BIMI], or Brand Indicators for Message Identification, is a specification to
> let supporting email clients show a brand's logo next to verified email 
> domains.
> It would be great to have the Debian logo shown next to @debian.org addresses.
> 
> An email service provider implements it by implementing DMARC with a 
> quarantine
> or reject policy, as well as adding a TXT record pointing to the logo.
> 
> For a quick intro, please see 
> https://postmarkapp.com/blog/what-the-heck-is-bimi
> 
> BIMI: https://bimigroup.org/

(everything I know about BIMI I've learned from
https://16years.secvuln.info/ )


-- 
WBR, wRAR




Re: BIMI verified email logo for @debian.org addresses

2024-06-27 Thread gregor herrmann
On Thu, 27 Jun 2024 17:19:11 +0500, Andrey Rakhmatullin wrote:

> On Thu, Jun 27, 2024 at 07:29:45PM +0800, Blair Noctis wrote:
> > [BIMI], or Brand Indicators for Message Identification, is a specification 
> > to
> > let supporting email clients show a brand's logo next to verified email 
> > domains.
> > It would be great to have the Debian logo shown next to @debian.org 
> > addresses.

> (everything I know about BIMI I've learned from
> https://16years.secvuln.info/ )

Hanno also gave a talk at the MiniDebConf in Berlin earlier this
year:
https://berlin2024.mini.debconf.org/talks/17-breaking-dkim-and-bimi-with-the-2008-debian-openssl-bug/

Video at
https://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Berlin/


Cheers,
gregor

-- 
 .''`.  https://info.comodo.priv.at -- Debian Developer https://www.debian.org
 : :' : OpenPGP fingerprint D1E1 316E 93A7 60A8 104D  85FA BB3A 6801 8649 AA06
 `. `'  Member VIBE!AT & SPI Inc. -- Supporter Free Software Foundation Europe
   `-   BOFH excuse #112:  The monitor is plugged into the serial port 



Re: Reviving schroot as used by sbuild

2024-06-27 Thread Simon McVittie
On Thu, 27 Jun 2024 at 11:46:51 +0200, Helmut Grohne wrote:
> I am concerned about behavioural differences due to the
> reimplementation-from-first-principles aspect, though. Jochen and
> Aurelien will know more here, but I think we had a fair number of FTBFS
> bugs due to such differences. None of them was due to the architecture of
> creating namespaces for each command; most were due to not having gotten
> containers right in general. Some were due to broken packages, e.g.
> packages skipping tests when they detect schroot.

Right - this is an instance of the more general problem pattern, "if we
don't test a thing regularly, we can't assume it works". We routinely
test sbuild+schroot (on the buildds), and individual developers often
try builds without any particular isolation (on development systems or
expendable test systems), but until recently sbuild's unshare backend
was not something that would be routinely tested with most packages,
and similarly most packages are not routinely built with Podman or Docker
or whatever else.

In packages that, themselves, want to do things with containers during
their build or testing (for example bubblewrap and flatpak), there will
typically be a code path for "no particular isolation" that actually
runs the tests (otherwise upstream would not find the tests useful), and
a code path for sbuild+schroot that skips the tests (otherwise they'd
fail on our historical buildds), but the detection that we are in a
locked-down environment where some tests need to be skipped might not
be 100% correct. I know I've had to adjust flatpak's test suite several
times to account for things like detecting whether FUSE works (because
on DSA'd machines it intentionally doesn't, as a security hardening step).

> If we move beyond containers and look into building
> inside a VM (e.g. sbuild-qemu) we are in a difficult spot, because we
> need e.g. systemd for booting, but we may not want it in our build
> environment. So long term, I think sbuild will have to differentiate
> between three contexts:
>  * The system it is being run on
>  * The containment or virtualisation environment used to perform the
>    build
>  * The system where the build is being performed inside the containment
>    or virtualisation environment

Somewhat prior art for this: https://salsa.debian.org/smcv/vectis uses
a VM (typically running Debian stable), installs sbuild + schroot into it,
and uses sbuild + schroot for the actual build, in an attempt to replicate
the setup of the production buildds on developer machines. In this case
sbuild is in the middle layer instead of the top layer, though.

Similarly, when asked to test packages under lxc (in an attempt to
replicate the setup of ci.debian.net), vectis installs lxc into a VM,
and runs autopkgtest on the VM rather than on the host system.

Of course, I'd prefer it if Debian's production infrastructure was
something that would be easier to replicate "closely enough" on my
development system (such that packages that pass tests on my development
system are very likely to pass tests on the production infra), without
damaging my development system if I use it to build a malicious,
compromised or accidentally-low-quality package that creates side-effects
outside the build environment.

> I don't quite understand the need for a Dockerfile here. I suspect that
> this is the obvious way that works reliably, but my impression was that
> using podman import would be easier.

Honestly, the need for a Dockerfile here is: I already knew how to build
containers from a Dockerfile, and I didn't read the documentation for
the lower-level `podman import` because `podman build` can already do
what I needed.

I see this as the same design principle as why we encourage package
maintainers to use dh, even when building trivial "toy" packages like
hello, and in preference to implementing debian/rules at a lower level
in trivial cases. To build a non-trivial container with multiple layers,
you'll likely need a Dockerfile (or docker-compose, or some similar thing)
*anyway*, so a typical user expectation will be to have a Dockerfile, and
anyone building a container will likely already have learned the basics
of how to write one; and then we might as well follow the same procedure
in the trivial case, rather than having the trivial case be different and
require different knowledge.

> > $ autopkgtest -U hello*.dsc -- podman localhost/local-debian:sid
> 
> This did not work for me. autopkgtest failed to create a user account.

Please report a bug against autopkgtest with steps to reproduce. It worked
for me, on Debian 12 with a local git checkout of autopkgtest, and it's
probably something that ought to work - although it's always going to be
non-optimal, because it will waste a bunch of time doing basic setup like
installing dpkg-dev and configuring the apt proxy before every test. The
reason why we have autopkgtest-build-podman is to do that setup fewer
times, cache the result, and amortize the cost.
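
A sketch of that flow (assuming autopkgtest-build-podman's --image option
and the autopkgtest/ prefix it applies to the tag of the resulting image):

 autopkgtest-build-podman --image localhost/local-debian:sid
 autopkgtest hello*.dsc -- podman autopkgtest/localhost/local-debian:sid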

Re: BIMI verified email logo for @debian.org addresses

2024-06-27 Thread Timo Röhling

* gregor herrmann  [2024-06-27 14:50]:

Hanno also gave a talk at the MiniDebConf in Berlin earlier this
year:
https://berlin2024.mini.debconf.org/talks/17-breaking-dkim-and-bimi-with-the-2008-debian-openssl-bug/

Video at
https://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Berlin/


TL;DW: 26:02 to 26:20
https://laotzu.ftp.acc.umu.se/pub/debian-meetings/2024/MiniDebConf-Berlin/33-breaking-dkim-and-bimi-with-the-2008-debian-openssl-bug.lq.webm#t=1562,1580

--
⢀⣴⠾⠻⢶⣦⠀   ╭╮
⣾⠁⢠⠒⠀⣿⡁   │ Timo Röhling   │
⢿⡄⠘⠷⠚⠋⠀   │ 9B03 EBB9 8300 DF97 C2B1  23BF CC8C 6BDD 1403 F4CA │
⠈⠳⣄   ╰╯




Re: Reviving schroot as used by sbuild

2024-06-27 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting Simon McVittie (2024-06-27 15:59:01)
> On Thu, 27 Jun 2024 at 11:46:51 +0200, Helmut Grohne wrote:
> > I don't quite understand the need for a Dockerfile here. I suspect that
> > this is the obvious way that works reliably, but my impression was that
> > using podman import would be easier.
> 
> Honestly, the need for a Dockerfile here is: I already knew how to build
> containers from a Dockerfile, and I didn't read the documentation for
> the lower-level `podman import` because `podman build` can already do
> what I needed.
> 
> I see this as the same design principle as why we encourage package
> maintainers to use dh, even when building trivial "toy" packages like
> hello, and in preference to implementing debian/rules at a lower level
> in trivial cases. To build a non-trivial container with multiple layers,
> you'll likely need a Dockerfile (or docker-compose, or some similar thing)
> *anyway*, so a typical user expectation will be to have a Dockerfile, and
> anyone building a container will likely already have learned the basics
> of how to write one; and then we might as well follow the same procedure
> in the trivial case, rather than having the trivial case be different and
> require different knowledge.

I have never in my life written a Dockerfile and so far I've only used
"podman import" instead. Your explanation makes sense to me. I had no
idea that "podman build" sits at a higher level than the "podman import"
plumbing. As a container noob it was always easier for me to write:

mmdebstrap [my customizations] unstable | podman import - debian

If I understand what you are saying correctly, then what should instead
be done is to write a Dockerfile that imports a vanilla tarball and then
does the customizations via the Dockerfile?
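
Something like this, perhaps (a sketch; the RUN line stands in for
whatever [my customizations] would otherwise have done):

 FROM scratch
 ADD vanilla.tar.gz /
 RUN apt-get update && apt-get -y install build-essential
 CMD ["/bin/bash"]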

Can a Dockerfile be read from stdin? It's a small wrinkle to me that I would
then need to create a private temporary directory with a Dockerfile first
instead of just shoving it in over a pipe.

> > Do I understand correctly that in this variant, you intend to use podman
> > without its image management capabilities and rather just use --rootfs
> > spawning two podman containers on the same --rootfs (one after another)
> > where the first one installs dependencies and the second one isolates the
> > network for building?
> 
> Maybe that; or maybe use its image management, tell the first podman command
> not to delete the container's root filesystem (don't use --rm), and then
> there's probably a way to tell podman to reuse the resulting filesystem
> with an additional layer in its overlayfs for the network-isolated run.
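
A sketch of that commit-based variant (untested; "hello" stands in for a
real source package):

 c=$(podman create local-debian/sid apt-get -y build-dep hello)
 podman start -a "$c"               # install build dependencies
 podman commit "$c" local-sid-deps  # freeze the result as a new image layer
 podman rm "$c"
 podman run --rm --network=none local-sid-deps dpkg-buildpackage -us -uc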
> 
> Please note that I am far from being an expert on podman or the
> "containers" family of libraries that it is based on, and I don't
> know everything it is capable of. Because Debian has a lot of pieces
> of infrastructure we have built for ourselves from first principles,
> I've had to spend time on understanding the finer points of sbuild,
> schroot, lxc and so on, so that I can replicate failure modes seen on
> the buildds and therefore fix release-critical bugs in the packages that
> I've taken responsibility for (and occasionally also try to improve the
> infrastructure itself, for example #856877 which recently passed its
> 7th birthday). That comes with an opportunity cost: the time I spent
> learning about schroot is time that I didn't spend learning about OCI.
> 
> One of the reasons I would like to have fewer Debian-specific pieces in
> our stack is so that other Debian developers don't have to do what I
> did, and can instead spend their time gaining transferrable knowledge
> that will be equally useful inside and outside the Debian bubble (for
> example the best ways to use OCI images, and OCI-based tools like
> Docker and Podman, which have a lot of overlap in how they are used even
> though they are rather different behind the scenes).

Thank you for this text as well as the one in your initial email, in which
you caution against more Debian-isms maintained by only a very few people.
As the author of the unshare backend I am guilty of having added another
Debian-specific thing instead of re-using existing solutions. Maybe my defense
can be that when I wrote that code in 2018, there was no podman in Debian yet?
I am not attached to the unshare code; I will gladly throw it out for
something better. The less code I have to maintain, the better for me. I
do not dislike podman either, and I am happy that, in contrast to docker,
there is no persistent service running in the background.

What I wanted to mainly bring up in this email are the following things:

Creating build chroots from artifacts that are signed with the Debian
archive keyring is important to me. Even though, as Holger pointed out,
the Debian images that one can download can be reproduced independently,
I would rather make sure that I receive what I think I receive by
creating my chroot via mmdebstrap/apt, verified against my local keyring.
Maybe in the future debian.org can publish build chroots.
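
A minimal sketch of that verified bootstrap (assuming mmdebstrap's
--keyring option):

 mmdebstrap --variant=apt \
   --keyring=/usr/share/keyrings/debian-archive-keyring.gpg \
   unstable chroot.tar.gz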

Re: Reviving schroot as used by sbuild

2024-06-27 Thread Simon McVittie
On Thu, 27 Jun 2024 at 17:26:20 +0200, Johannes Schauer Marin Rodrigues wrote:
> But, if everybody is so excited about this, where are the sbuild contributors
> implementing this?

I'm sorry, consider it added to my list. As usual, there's no guarantee
that I will get there within my lifetime, but I'll make sure to feel
suitably guilty about my failure to achieve it.

But, having said that:

> The excitement can probably also be seen in the existence of 13
> independent software packages that do "debian package building in
> docker"

The reason we don't have 14, one of them from me[1], is the same reason
I would be reluctant to develop a new sbuild backend without knowing that
it's what the maintainers of our production infrastructure want to use:

Packages are de facto unreleasable (which is effectively a higher severity
than any RC bug) if they don't compile successfully and pass tests in the
project's official build environment. Until recently, this meant stable's
sbuild and schroot (or sometimes oldstable's sbuild and schroot) entering
an unstable chroot; more recently, some official buildds switched to the
unshare backend, resulting in build failures in that backend becoming
worse-than-RC too.

If I do my test-builds in sbuild + schroot in an (old?)stable VM[3], and
they succeed, then I can be somewhat confident that when I do the upload,
the build on the official buildds will succeed too (at least on x86).

If I do my test-builds in some other way, for example directly in a VM,
or in podman, docker, lxc, pbuilder, deb-build-snapshot or whatever other
thing I might personally prefer or find more convenient, then I run the
risk of having my upload fail to build on the official buildds for a
schroot-specific reason, which of course is an unacceptable situation for
which I would rightly be held responsible; and step 1 of resolving that
situation would be to try to replicate the official build environment,
so I might as well save some time by *already* attempting to replicate
the official build environment. A lot of my Debian contributions are
already guilt-based ("if I don't get this uploaded then $bug is in
some way my fault"), and I'm sorry but I am reluctant to add to that
by creating new and avoidable opportunities to fail to live up to the
project's expectations.

Ideally of course I should do my test-builds in *both* sbuild + schroot
and whatever container technology I'm (hypothetically) proposing as
the new production infrastructure, but then each package I release will
take twice as long per attempt to release, and "smcv takes too long to
release important fixes" is a failure mode that cannot be fixed by any
number of additional QA checks.

Until recently, my understanding is that DSA's policy was to lock
down all official machines by preventing unprivileged creation of user
namespaces system-wide, which rules out podman, making it a poor time
investment. This is clearly not entirely true any more, because if
it was, buildds would not be able to use sbuild's unshare backend -
so perhaps now is the time to be proposing a sbuild podman backend,
and I should probably be writing one instead of replying to this message.

Arguably there is already a sbuild podman backend, albeit indirectly:
tell sbuild to use an autopkgtest virt server, and then specify the
podman virt server as the one to use. (This has the limitation that it
can't use the network to install build-dependencies and then disable
networking for the actual build, which is a limitation that it shares
with the current schroot backend.) As I mentioned in another thread,
unfortunately I have spent considerably less time on podman in autopkgtest
than it deserves: I have not tested it recently, so it's entirely possible
that it doesn't work. If that's the case, then I apologise.
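
For the record, that indirect route might look something like this (a
sketch with option names as in recent sbuild; untested):

 sbuild --chroot-mode=autopkgtest \
   --autopkgtest-virt-server=podman \
   --autopkgtest-virt-server-opt=localhost/local-debian:sid \
   -d unstable hello*.dsc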

I'm sorry that I have failed to provide a concrete solution to this
problem, and I will try to do better in future.

smcv

[1] Arguably we *do* have 14, one of them from me, because
deb-build-snapshot[2] has an "in Docker"/"in Podman" mode - although
deb-build-snapshot primarily exists to automate generation of labelled
snapshot test-builds for manual testing, and the fact that it has a
"build over there" mode is only a side-effect. It isn't intended
for production use (for example it always builds both arch-dep and
arch-indep binary packages, which of course is an unacceptably lazy
shortcut for production or QA use) and I don't maintain it with a
production-quality level of service, which is why there is no ITP
and also no wishlist bug against devscripts. I am sorry that this
tool does not yet meet the project's quality standards.

[2] https://salsa.debian.org/smcv/deb-build-snapshot

[3] ... and replicate all the other behaviours that the buildds
have, such as setting an unreachable home directory, building
:any and :all separately, and choosing the same undocumented apt
resolver for experimental and backports 

Re: Reviving schroot as used by sbuild

2024-06-27 Thread Simon McVittie
On Wed, 26 Jun 2024 at 18:05:15 -0400, Reinhard Tartler wrote:
> I imagine that one could whip up some kind of wrapper
> that is building a container either from a tarball created via mmdebstrap or
> similar
> using buildah, have it install all necessary build dependencies, and then use
> podman to run the actual build

Yes, one could, and many have; but not (as far as I know) within the
framework of sbuild, in a way that might be considered acceptable by the
operators of our official buildds.

> I also briefly started playing with debcraft, which I really like from a
> usability perspective

On Thu, 27 Jun 2024 at 10:52:27 +0200, Bastian Venthur wrote:
> For some time now I have had the idea to build my Debian packages in a
> clean Docker container instead of using cowbuilder etc.

There are lots of options for doing this, some of which are listed in
.

All of these have the same problem as cowbuilder, pbuilder, and any
other solution that is not sbuild + schroot: it isn't (currently) what
the production Debian buildds use, therefore it is entirely possible
(perhaps even likely, depending on what packages you maintain) that your
package will build successfully and pass tests in your own local builder,
but then fail to build or fail tests on the buildds as a result of some
quirk of how schroot sets up its chroots, which is a worse-than-RC bug
making the package unreleasable.

I'm sure that a better maintainer than me could avoid this source
of stress by simply recognising situations that could cause a build
failure before they happen, and ensuring that no mistakes are made;
but unfortunately the only way I have found to be able to be somewhat
confident that my packages will build successfully in the real Debian
infrastructure, within my own limitations, is to replicate a real
Debian buildd (to the best of my ability) and use that replica for
my test-builds.

smcv



Re: Reviving schroot as used by sbuild

2024-06-27 Thread Johannes Schauer Marin Rodrigues
Simon,

Quoting Simon McVittie (2024-06-27 19:16:54)
> On Thu, 27 Jun 2024 at 17:26:20 +0200, Johannes Schauer Marin Rodrigues wrote:
> > But, if everybody is so excited about this, where are the sbuild 
> > contributors
> > implementing this?
> 
> I'm sorry, consider it added to my list. As usual, there's no guarantee
> that I will get there within my lifetime, but I'll make sure to feel
> suitably guilty about my failure to achieve it.

if you want to do me a favour, please do not put it on your todo list.
Even more importantly: please try not to feel guilty about anything. If
at all possible, I'd like to assure you that you were not even close to
being on the list of people (if we imagine that such a list existed in
the first place) whom I would hold responsible.

> This is clearly not entirely true any more, because if it was, buildds would
> not be able to use sbuild's unshare backend - so perhaps now is the time to
> be proposing a sbuild podman backend, and I should probably be writing one
> instead of replying to this message.

Or you let other people take care of it. There are more than a dozen attempts
outside of sbuild. How hard can it be? I consider you one of the most capable
and clever people in the project and I greatly value your input into this
discussion. But were I to choose where to put your time, it would not be into
stretching your resources even more thinly by becoming the sbuild+podman
maintainer. If you are really eager I do not want to stop you either. But
please, please do not feel pressured by my last email.

> I'm sorry that I have failed to provide a concrete solution to this problem,
> and I will try to do better in future.

Please accept my apology for how I phrased my last email. I did not want you to
feel sorry for anything.

I'm sincerely sorry. I did not mean to make you feel guilty.

Sorry.

josch



Builds that pass locally but fail on sbuild? (Re: Reviving schroot as used by sbuild)

2024-06-27 Thread Otto Kekäläinen
Hi Simon!

> There are lots of options for doing this, some of which are listed in
> .
>
> All of these have the same problem as cowbuilder, pbuilder, and any
> other solution that is not sbuild + schroot: it isn't (currently) what
> the production Debian buildds use, therefore it is entirely possible
> (perhaps even likely, depending on what packages you maintain) that your
> package will build successfully and pass tests in your own local builder,
> but then fail to build or fail tests on the buildds as a result of some
> quirk of how schroot sets up its chroots, which is a worse-than-RC bug
> making the package unreleasable.

Could you point me to some Debian bug numbers or otherwise share
examples of cases where a build succeeded locally but failed on the
official Debian buildds due to something that is specific to
sbuild/schroot?

I have never run into such a situation despite doing Debian packaging
for 10 years with fairly complex C++ software targeting all architectures
Debian supports. Also, as a member of the Salsa CI team, I don't recall
ever seeing a bug report about something that built successfully on Salsa
in a container but failed to build on an actual buildd.

I am not dismissive of your claim - as a very senior DD you surely
have those experiences - I am just curious to learn what those cases
might have been.

I could imagine that buildd builds fail if the source was prepared in a
non-hermetic environment that ran as root, or had network access, or if
the build environment was unclean and debian/control was missing some
dependencies, but those are elementary hermetic-build-environment
properties and not inherently something that *only* sbuild/schroot does.

Related, you might want to take a peek at the source code of
https://salsa.debian.org/otto/debcraft to see how it supports both Podman
and Docker, and how it generates the 'root.tar.gz'-equivalent container
automatically based on debian/control and debian/changelog contents, and
then runs the actual build as a regular non-root user in a container that
has no network access. If I learn about other requirements for a hermetic
build environment, I would be happy to incorporate them.

- Otto



Bug#1074405: ITP: fibocom-pc-services -- Services to support WWAN modules manufactured by Fibocom Wireless Inc.

2024-06-27 Thread Kai-Chuan Hsieh
Package: wnpp
Severity: wishlist
Owner: Kai-Chuan Hsieh 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: fibocom-pc-services
  Version : 1.0.10
  Upstream Author : zhaoziqi 
* URL : https://github.com/fibocom-pc/linux_apps_opensource
* License : GPL, MIT
  Programming Lang: C
  Description : Services to support WWAN modules manufactured by Fibocom 
Wireless Inc.

The software package was created by Fibocom Wireless Inc. to support its
WWAN modules. The upstream source intends to provide services for
systemd-compatible Linux distributions.

The services provide fccunlock, SAR configuration, firmware switch, and
firmware upgrade functions for utilizing the company's WWAN modules.

It utilizes fastboot for the firmware operations, namely firmware switch
and firmware upgrade. For fccunlock and SAR configuration, the service
sends AT commands directly to the serial port. As far as I know, no other
package provides these services for this vendor.

The reason I need to package it is that I work on certifying Ubuntu
Desktop images for OEM customers, so I need to package the required
service and get it into the Debian and Ubuntu archives.

I have started packaging the upstream source at
https://salsa.debian.org/kchsieh/fibocom-pc-services/, in the debian/sid
branch. I'll use that repo to maintain the code, using the salsa-ci-team
pipeline to catch failures when a new upstream release becomes available.

I need a sponsor for the package.