Fabian Grünbichler wrote:
> rustc currently builds the Rust standard libraries (libcore, liballoc,
> libstd) for each of the supported Debian architectures, and additionally
> also for some wasm targets, shipped as `libstd-rust-dev-wasm32:all`. This
> package is :all because it doesn't match any of the existing Debian
> architectures, and rustc of any arch can use the resulting files to compile
> for the corresponding targets (currently `wasm32-unknown-unknown` aka
> browsers, and `wasm32-p1`/`wasm32-p2` which use `wasi-libc`, which is
> similarly "fake" :all).
>
> We also used to ship windows standard libs built via the mingw toolchain,
> but those were dropped in Trixie (and those were not :all anyway, but
> mapped to amd64/i386).
>
> I would like to introduce new packages for (Tier 3) BPF targets
> `bpfeb-unknown-none` and `bpfel-unknown-none`, to support use cases like
> aya-rs properly. These exist in a similar void like the wasm ones - they
> don't really match any of the Debian architectures, and rustc can build and
> use them on any build architecture. Obviously with the caveat that
> endianness matters here, so you need to select the right variant of the
> target matching the platform you want to run the compiled BPF bytecode on.
>
> Currently users that want to use those targets with the packaged toolchain
> need hacks to enable building libcore from scratch for that target, which
> in turn means a lot of guides push people to use rustup and nightly
> toolchains, even though the packaged one would work fine if it would ship a
> pre-compiled libcore.
>
> Is there a better way to handle this or is introducing a new :all package
> fine? Similar questions will arise in the future once 64-bit wasm
> stabilizes..
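For context (for anyone else reading along), the "hacks" mentioned above usually boil down to something like the following rough sketch; `-Z build-std` is cargo's unstable build-std feature, hence the nightly requirement:

```shell
# Current workaround: rebuild libcore from source for the Tier 3 target.
# Requires a rustup nightly toolchain plus the rust-src component.
rustup toolchain install nightly --component rust-src
cargo +nightly build -Z build-std=core --target bpfel-unknown-none
```

With a pre-compiled libcore shipped in a new :all package, a plain `cargo build --target bpfel-unknown-none` on the packaged stable toolchain should work instead.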
:all seems like it might be the right way to handle this, since these builds of `std` depend only on the *target* and don't depend at all on the *host* architecture. If hypothetically we *had* some kind of wasm target for Debian, we could potentially represent that as `libstd-rust-dev:wasm32-wasip2`, but we don't have such targets, and it's not *completely* obvious that the concepts map onto each other perfectly. I *think* they map cleanly if everything follows LLVM's model for cross-compilation, but see below for cases that *don't* work like that.

By way of example, given the current architectures Debian has, the mingw-based Rust targets you mentioned would *also* make sense as `:all`, because they don't actually depend on the host. In theory, we could have an `x86_64-pc-windows-gnu` architecture for Debian, and have the Rust mingw support be `libstd-rust-dev` for that `x86_64-pc-windows-gnu` architecture. That would have some advantages, but again, it's not *completely* obvious that the concepts map onto each other perfectly outside the LLVM model.

As an example of what will make this more complicated: in the future we will have Rust targets that use `rustc_codegen_gcc` (AKA `cg_gcc`), which uses `libgccjit` instead of LLVM. GCC doesn't do cross-compilation the way LLVM does, and requires a *unique* build of GCC for every single (target, host) pair. For those cases, I'd expect that those targets will need to build :arch packages for every *host* :arch. For instance, a `cg_gcc`-based target for the hypothetical xyz architecture would need either a `libstd-rust-dev-xyz:all` or `libstd-rust-dev:xyz` package, but it would *also* need a `librust-codegen-gcc-dev-xyz:amd64` package for amd64 hosts, and a `librust-codegen-gcc-dev-xyz:aarch64` package for aarch64 hosts. *That* is the case that makes me say this doesn't perfectly map onto Debian architectures and sometimes requires putting the architecture into the package name instead.
I *think* that problem wouldn't exist if everything followed the LLVM model, which doesn't require M*N cross-compilers for M hosts and N targets.
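To make the build-count difference concrete, here's a toy sketch (hypothetical architecture lists, just counting builds):

```python
# Toy illustration of the M*N vs N build-count difference.
# The host and target names below are hypothetical examples.
hosts = ["amd64", "arm64", "riscv64"]    # M build/host architectures
targets = ["xyz", "bpfel-unknown-none"]  # N cross-compilation targets

# LLVM model: one backend serves every target, so one host-independent
# std build per target suffices (hence :all packages).
llvm_std_builds = len(targets)           # N builds

# cg_gcc model: each (target, host) pair needs its own GCC/libgccjit
# build, so the codegen packages multiply with the number of hosts.
cg_gcc_builds = len(hosts) * len(targets)  # M * N builds

print(llvm_std_builds, cg_gcc_builds)
```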

