On 1/27/26 3:51 PM, Alex Bennée wrote:
Pierrick Bouvier <[email protected]> writes:
On 1/27/26 10:15 AM, Alex Bennée wrote:
Pierrick Bouvier <[email protected]> writes:
On 1/27/26 1:28 AM, Alex Bennée wrote:
Richard Henderson <[email protected]> writes:
On 1/27/26 10:50, Alex Bennée wrote:
I think it was broken by 4203ea0247f (gitlab-ci: Add build tests for
wasm64) because we base the tests on the existence of dockerfiles and it
now generates multiple targets.
Do we actually use the wasm32 stuff anymore? Maybe we can just
rename it?
All of the wasm32 stuff is supposed to be gone.
I think [email protected] should fix it, but I need to
re-read how to trigger the weekly build on my test branch to test it.
r~
What if docker-verify depended on building the container locally
instead? It would give us a self-contained command that works the same
locally and in CI, without any yaml dependencies.
With the current approach, it's tricky to reproduce locally, as it
depends on the global gitlab registry, while all we are trying to see
is whether the images are buildable from scratch or not.
No - the point of verify was to check we were building in the
registry.
Hum, I'm not sure what "building in the registry" means. Containers
are always built locally then (optionally) pushed to a given registry.
Yes - and this checks that the registry is up to date. We don't want to
rebuild every time if we can use the cache. This benefits the CI and the
downstream users that pull the images.
Of course, and reusing layers is a good way to optimize our CI and local
dev workflow, and to reduce pressure on distro servers.
The point I was making relates to this weekly job only, which we
specifically want to run without all those layers, to test whether the
images are still buildable from scratch.
If you want to test something useful, it has to be a build from
scratch (docker build --no-cache ...).
A lot of the Makefile.include stuff is legacy now (although it's a useful
wrapper for building tests locally) because gitlab invokes the
containers directly. We used to do all sorts of dependency handling as
well but the lcitool approach is a lot cleaner even if we don't get
layered containers.
The commit message for 8bec7b9874 is:
```
gitlab: add a weekly container building job
This will hopefully catch containers that break because of upstream
changes as well as keep the container cache fresh.
```
IMHO, a simple weekly job doing what we want is:
docker build --no-cache - < $dockerfile
The containers are built by the dependencies:
needs:
# core
- amd64-centos9-container
- amd64-fedora-container
# cross
- amd64-debian-cross-container
- amd64-debian-user-cross-container
- amd64-debian-legacy-cross-container
- arm64-debian-cross-container
- hexagon-cross-container
- loongarch-debian-cross-container
- mips64el-debian-cross-container
- ppc64el-debian-cross-container
- riscv64-debian-cross-container
- s390x-debian-cross-container
- tricore-debian-cross-container
- xtensa-debian-cross-container
- win64-fedora-cross-container
- wasm64-emsdk-cross-container
# containers
- amd64-alpine-container
- amd64-debian-container
- amd64-ubuntu2204-container
- amd64-opensuse-leap-container
- python-container
- amd64-fedora-rust-nightly-container
I don't see the relation with gitlab, lcitool or the Makefile.
Maybe I'm missing some extra insight or goals that were not documented
in the original commit, but it seems the current implementation is not
doing what it's supposed to do.
We occasionally see failures due to upstream distro changes. By having a
separate job we can see if that happens independently of testing staging
trees.
Yes, and this happens when a new divergent layer is added, mostly when
we modify the dockerfiles themselves with lcitool.
From our container template:
https://gitlab.com/qemu-project/qemu/-/blob/master/.gitlab-ci.d/container-template.yml
```
.container_job_template:
...
export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:$QEMU_CI_CONTAINER_TAG"
export COMMON_TAG="$CI_REGISTRY/qemu-project/qemu/qemu/$NAME:latest"
...
docker build --tag "$TAG" --cache-from "$TAG" --cache-from "$COMMON_TAG" \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    $BUILD_ARGS \
    -f "tests/docker/dockerfiles/$DOCKERFILE_NAME.docker" "."
```
--cache-from means we reuse the existing layers, so the weekly job will
not expose anything we haven't already seen through regular PR pipelines.
You need to build with --no-cache to make sure docker builds from
scratch, and that is not what the current weekly job does.
As Alex and I discussed in private, the two solutions are:
1. make weekly-container use `docker build --no-cache - <
/path/to/dockerfile` directly, without pushing any result (and without
trashing everyone's cached layers).
2. extend the complex yaml machinery to conditionally use the cache or
not, and to push the resulting image or not.
IMHO, 1 is much simpler, and can even be turned into a daily job, so we
can see every day what the status of upstream distros is, and know as
soon as possible when they are fixed. As well, skopeo can be ditched,
since it doesn't check anything useful beyond whether the image is
available in the gitlab registry, which we know it is by design.
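To make 1 concrete, here is a minimal sketch of such a job. The
rebuild_from_scratch helper and the loop are my assumptions, not an
existing script; the dockerfile location matches the template above.

```shell
# Hypothetical sketch of solution 1: rebuild every dockerfile from
# scratch, without pushing the result or touching anyone's cached layers.
# rebuild_from_scratch DIR runs "docker build --no-cache" on each
# .docker file found in DIR.
rebuild_from_scratch() {
    dir="$1"
    status=0
    for dockerfile in "$dir"/*.docker; do
        echo "building $dockerfile from scratch"
        # --no-cache forces a full rebuild, so upstream distro breakage
        # shows up instead of being hidden by cached layers
        docker build --no-cache - < "$dockerfile" || status=1
    done
    return $status
}
```

A weekly (or daily) job would then just call it on
tests/docker/dockerfiles and report the exit status.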
I suspect Alex might prefer 2. and I won't push against it, but let him
debug the yaml pile :)
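For completeness, 2 could look something like this in
container-template.yml. QEMU_CI_NO_CACHE is a hypothetical variable that
only the weekly scheduled pipeline would set; this is a sketch, not a
tested template.

```
.container_job_template:
  script:
    ...
    # hypothetical: the weekly schedule sets QEMU_CI_NO_CACHE=1, regular
    # pipelines leave it unset and keep reusing cached layers
    - if [ -n "$QEMU_CI_NO_CACHE" ]; then
        CACHE_ARGS="--no-cache";
      else
        CACHE_ARGS="--cache-from $TAG --cache-from $COMMON_TAG";
      fi
    - docker build --tag "$TAG" $CACHE_ARGS
        --build-arg BUILDKIT_INLINE_CACHE=1
        -f "tests/docker/dockerfiles/$DOCKERFILE_NAME.docker" "."
```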
Pierrick